Abstract
Cyberbullying (CB) is an electronic form of bullying in which a group or an individual engages in purposeful, aggressive behaviour towards another group or individual on social media platforms. It involves hate messages spread through social media, email, and other channels on personal or public computers, as well as personal mobile phones. The hypothesized differences between CB and traditional bullying suggest that findings from traditional bullying research are insufficient to explain CB. With its rising frequency, CB has had a psychological and physical impact on victims. To limit the risk in smart cities, it is critical to recognize the CB context and its applications. From the perspective of the cyber world, however, addressing CB presents challenges such as a lack of awareness of aggressors and their identities, a lack of direct communication, and the difficulty of linking repercussions to offenders. Hence, this work proposes a new cyberbullying detection model that automatically classifies tweets using optimized deep learning.
Introduction
Facebook, Twitter, and Instagram have all become popular online venues for individuals of all ages to engage and socialize [1]. Cyberbullying (CB) is an electronic form of bullying in which a group or an individual engages in purposeful, aggressive behaviour towards another group or individual on social media platforms. Zych et al. [2] asserted that individuals disseminate hate messages through social media, emails, and other methods on personal or public computers, as well as personal mobile phones. This has sparked concern among governments and poses a significant threat. The hypothesized differences between CB and traditional bullying suggest that findings from traditional bullying research are insufficient to explain CB [3]. The characteristics of CB are both connected and unique relative to those of traditional bullying. With its rising frequency, CB has had a psychological and physical impact on victims. To limit the risk in smart cities, it is critical to recognize the CB context and its applications. From the perspective of the cyber world, however, addressing CB presents challenges such as a lack of awareness of aggressors and their identities, a lack of direct communication, and the difficulty of linking repercussions to others [4].
While these platforms allow individuals to connect and interact in previously unimaginable ways, they have also given rise to harmful behaviours such as CB. CB is a form of psychological abuse with a large social impact, and it has been on the rise, especially among young people who spend much of their time moving between various social media sites. Because of their popularity and the anonymity that the Internet affords abusers, social media networks such as Twitter and Facebook are particularly vulnerable to CB [5]. In India, for example, 14% of all abuse takes place on Facebook and Twitter, with 37% of these occurrences involving children. Furthermore, CB has the potential to cause major mental health problems. According to Ortega-Barón et al. [6], anxiety, sadness, stress, and social and emotional problems resulting from CB are the main causes of suicide. According to Kowalski, Limber & McCord [7], this necessitates the development of a method for detecting CB in social media posts. It is nearly impossible to manually monitor and manage CB on the Twitter network. Furthermore, mining social media posts to identify CB is a challenging task [8]. Twitter tweets, for example, are frequently short, full of slang, and may include emoticons and GIFs, making it hard to discern people's intents and meanings from their social media communications alone [9]. Furthermore, if the bully disguises his or her bullying with sarcasm or passive aggressiveness, it can be difficult to identify [10].
Despite the difficulties that social media communications present, CB detection on social media is a hotly debated issue. Within the Twitter network, CB detection has mostly been pursued using tweet classification and, to a lesser degree, topic modelling techniques. Text categorization using supervised machine learning (ML) models is frequently used to divide tweets into bullying and non-bullying categories [11]. Zych et al. [12] found that classifiers based on deep learning (DL) have also been used to divide tweets into bullying and non-bullying categories. Supervised classifiers perform poorly when the class labels are fixed and irrelevant to new events. Such a classifier may be acceptable for a pre-determined set of events, but it cannot handle tweets whose topics change on the fly. Topic modelling techniques have long been used to identify the most important themes in a dataset in order to generate patterns or classes across the entire dataset [13]. According to Zych et al. [14], despite the comparable premise, generic unsupervised topic models are ineffective for short texts; hence, specialized unsupervised short-text topic models have been used. These algorithms are capable of detecting popular themes in tweets [15].
These models aid in the extraction of relevant topics by utilizing bidirectional processing. These unsupervised models, on the other hand, need considerable training to gather sufficient prior information, which is not always available. Given these constraints, as Wang et al. [16] noted, an effective tweet classification strategy must be created to bridge the gap between the classifier and the topic model, allowing for much-improved flexibility [17]. Researchers are now working to improve these strategies so that they can be used to solve real-world challenges. Machine learning techniques, together with common and psychological features, have been used in many research outputs on intelligent CB identification. These intelligent systems are largely constrained once human commentary is removed from the situation [18]. In a previous study, Garaigordobil & Machimbarrena [19] discussed how to improve CB detection and classification by using the user context in action, which takes into account the user's traits and the history of their comments. Researchers are currently working on novel methods for automatically recognizing complicated real-world situations, and this work accordingly focuses on cyberbullying detection using optimized deep learning.
Literature Review
Some recent works by different authors are reviewed below:
Murshed et al. [20] proposed a new approach for classifying tweets that combines the dolphin echolocation algorithm (DEA) with an Elman-type recurrent neural network (DEA-RNN) to detect cyberbullying on Twitter. DEA was employed to minimize the training time and to fine-tune the Elman-type RNN. This approach can cope with topic models and manage the dynamic nature of short texts for effective extraction. DEA-RNN attained a better performance rate, and feature compatibility was minimized when the initial data exceeded the initial input. However, this approach was limited entirely to the Twitter dataset.
Yuvaraj et al. [21] described an efficient classification method based on multiple features for automatically detecting cyberbullying. The main objective of this method was to recognize cyberbullying texts without mapping them into oversized, high-dimensional spaces. With this limitation in mind, a text classification engine was introduced. First, the tweets were pre-processed to remove background information and other noise; then, feature extraction was performed to extract the chosen features; and finally, classification was carried out without over-fitting the data. To process the elements of an input, a deep decision tree classifier was implemented that uses the hidden layers of a DNN as its tree nodes. Although this method performed well, it still needs to be compared against other optimization approaches and evaluated more thoroughly on high-dimensional real-time datasets.
Purnamasari et al. [22] presented a cyberbullying detection method for Twitter using information gain-based feature selection and support vector machine (SVM)-based classification. The purpose of this method was to categorize tweets that contained bullying. The SVM helps obtain the separating hyperplane between the positive and negative classes, while information gain was applied to select the most significant features and discard those not useful for classification. The process starts with stemming, filtering, tokenizing, and term weighting; feature selection is then performed by evaluating the entropy value of all terms, and the selected terms are classified. This method achieved better performance, but it would require integrated feature selection models to further boost classification performance.
Kumar & Sachdeva [23] presented a deep convolutional neural network (DCNN) model based on a capsule network (CapsNet) with dynamic routing for detecting multimodal cyberbullying. This model uses three different modalities of social media content: infographics, visuals, and text. The CapsNet with dynamic routing was employed to predict textual bullying content, whereas the CNN was employed to detect visual bullying content. Using Google Lens in the Google Photos app, the infographic content was discretized by separating the text from the image. For multimodal learning, a perceptron-based decision-level late fusion model was employed to integrate the discrete modality predictions and produce the output. Despite performing well, this model still needs improved accuracy.
Yuvaraj et al. [24] suggested a nature-inspired model for automatically classifying cyberbullying on a multimedia social networking platform. This model integrated a feature extraction engine and a classification engine operating on social media data. For cyberbullying detection, the feature extraction engine extracts context, user comments, and psychological features, while an artificial neural network (ANN) classifies the outcomes; the classifier is coupled with an evaluation system that either penalizes or rewards the classified result. Deep reinforcement learning (RL) was used for this evaluation, which enhanced the classification performance. This model achieved better outcomes, but other neural network models could be integrated for extracting information to further assist in diminishing cyberbullying.
Objectives
The major objectives of the proposal are as follows:
Proposing a new cyberbullying detection model to automatically classify the tweets based on optimized deep learning.
Performing pre-processing, feature extraction, and classification to accurately classify tweets, based on the context present in them, into normal and spam.
Introducing an optimized deep learning model called the bidirectional coot optimized gated recurrent unit (BiCo-GRU) to accurately separate spam tweets from normal tweets and achieve higher classification accuracy.
Exploring the advantage of the coot optimization algorithm (COA) for fine-tuning the parameters of the Bi-GRU model to improve classification accuracy.
Conducting extensive simulations to compare the performance of the proposed detection model with the existing detection models to prove its efficiency.
The proposed method comprises the following phases: pre-processing, feature extraction, and classification. In the pre-processing phase, the input documents are cleaned to remove noise and unwanted symbols. Pre-processing is carried out in sub-phases: noise removal, cleaning, and transformation. During noise removal, punctuation, emoticons, hashtags, and URLs are removed. In the cleaning phase, acronyms are expanded, the language is normalized, spell checking is performed, and repeated characters are removed. Finally, tokenization, stemming, and stop word removal are performed in the transformation phase. From the pre-processed tweets, features such as TF-IDF, parts of speech (POS), unigrams, bigrams, and a proper noun score are extracted.
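To make these pre-processing sub-phases concrete, the following is a minimal Python sketch using regular expressions and NLTK; the acronym map and cleaning rules are illustrative assumptions rather than the exact implementation used in this work.

```python
import re
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize

nltk.download("punkt", quiet=True)
nltk.download("stopwords", quiet=True)

ACRONYMS = {"u": "you", "lol": "laughing out loud"}  # illustrative acronym map

def preprocess_tweet(tweet: str) -> list[str]:
    # Noise removal: URLs, mentions, hashtags, emoticons/symbols, punctuation
    tweet = re.sub(r"http\S+|www\.\S+", " ", tweet)
    tweet = re.sub(r"[@#]\w+", " ", tweet)
    tweet = re.sub(r"[^a-zA-Z\s]", " ", tweet)
    # Cleaning: expand acronyms and collapse repeated characters ("soooo" -> "soo")
    tweet = " ".join(ACRONYMS.get(w.lower(), w) for w in tweet.split())
    tweet = re.sub(r"(.)\1{2,}", r"\1\1", tweet)
    # Transformation: tokenization, stop word removal, stemming
    stemmer = PorterStemmer()
    stops = set(stopwords.words("english"))
    return [stemmer.stem(t) for t in word_tokenize(tweet.lower()) if t not in stops]

print(preprocess_tweet("@user Sooo ugly!! Check http://bad.example #hate"))
```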
The extracted features are then provided to the classification model for training. In the proposed work, the bidirectional coot optimized gated recurrent unit (BiCo-GRU) model is used to classify the input features. The proposed classifier is an integration of a bidirectional gated recurrent unit (Bi-GRU) and the coot optimization algorithm (COA). The features are provided to the layers of the Bi-GRU, where they are trained on the context present in the tweets. The COA is executed within the layers of the Bi-GRU to optimally choose the parameters for training. This step enhances the training process and results in increased classification accuracy. The classifier identifies spam in the tweets based on their content and classifies them as normal or spam. Finally, evaluations are carried out to prove the performance efficacy of the proposed model against existing models (refer to figure 1).
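The coot optimization algorithm is a population-based metaheuristic. As a rough illustration of how such a search could tune Bi-GRU training parameters, the sketch below uses a deliberately simplified movement rule (each candidate drifts toward the current best) rather than the full COA update, and the fitness function is a toy stand-in for the validation accuracy that would be obtained by actually training the Bi-GRU.

```python
import random

def fitness(params):
    """Placeholder: in the real pipeline this would train the Bi-GRU with the
    candidate hyperparameters and return validation accuracy."""
    hidden_units, lr = params
    return -((hidden_units - 128) ** 2) / 1e4 - (lr - 1e-3) ** 2 * 1e5  # toy surface

def coa_like_search(pop_size=10, iterations=20):
    # Initialize a population of candidate solutions ("coots")
    pop = [(random.randint(32, 256), random.uniform(1e-4, 1e-2)) for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(iterations):
        new_pop = []
        for units, lr in pop:
            # Simplified movement: drift each coot toward the current leader
            units = int(units + random.uniform(0, 1) * (best[0] - units))
            lr = lr + random.uniform(0, 1) * (best[1] - lr)
            new_pop.append((max(16, units), max(1e-5, lr)))
        pop = new_pop
        best = max(pop + [best], key=fitness)
    return best

print("best (hidden_units, learning_rate):", coa_like_search())
```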
Expected Outcomes
Major performance metrics such as accuracy, precision, recall, F-measure, and specificity will be computed and compared with recent existing mechanisms for cyberbullying detection.
Methodology for "An Automatic Cyberbullying Detection Model in the Twitter Social Media Platform Based on a Bidirectional Coot Optimized Gated Recurrent Unit":
In this work, a number of cyberbullying detection features are extracted from Twitter. These features include:
TF-IDF (Term Frequency-Inverse Document Frequency):
TF-IDF is a numerical representation that reflects the importance of a word in a document relative to a corpus of documents.
It is calculated by considering the frequency of a term within a document (TF) and its rarity in the entire corpus (IDF).
TF-IDF captures the significance of words in distinguishing between cyberbullying and non-abusive content.
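A minimal sketch of TF-IDF feature extraction with scikit-learn (the two example tweets are invented for illustration):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

tweets = ["you are so dumb and ugly", "great game last night everyone"]  # toy examples
vectorizer = TfidfVectorizer(lowercase=True, stop_words="english")
tfidf_matrix = vectorizer.fit_transform(tweets)  # shape: (n_tweets, n_terms)

print(vectorizer.get_feature_names_out())
print(tfidf_matrix.toarray().round(2))
```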
Parts of Speech (POS):
POS tagging assigns grammatical labels (such as noun, verb, adjective, etc.) to words in a sentence.
Extracting POS tags provides information about the syntactic structure of the text, which can be useful in detecting cyberbullying patterns.
Certain POS tags may be more prevalent in cyberbullying instances, allowing the model to identify potential abusive language.
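A minimal sketch of POS tagging with NLTK (the example sentence is invented):

```python
import nltk

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

tokens = nltk.word_tokenize("You are such a pathetic loser")
print(nltk.pos_tag(tokens))
# e.g. [('You', 'PRP'), ('are', 'VBP'), ..., ('loser', 'NN')]
```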
Unigram and Bigram Features:
Unigrams refer to single words, while bigrams refer to pairs of consecutive words in a text.
By considering unigrams and bigrams as features, the model can capture both individual word usage and contextual information in the text.
Unigram and Bigram features help identify specific phrases or combinations of words that are indicative of cyberbullying.
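A minimal sketch of extracting unigram and bigram counts together with scikit-learn (the example tweets are invented):

```python
from sklearn.feature_extraction.text import CountVectorizer

tweets = ["shut up idiot", "nobody likes you at all"]  # toy examples
vectorizer = CountVectorizer(ngram_range=(1, 2))  # unigrams and bigrams together
X = vectorizer.fit_transform(tweets)
print(vectorizer.get_feature_names_out())  # e.g. 'shut', 'shut up', 'up idiot', ...
```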
Proper Noun Score:
Proper nouns are specific names of people, places, organizations, etc.
Assigning a proper noun score involves identifying and quantifying the presence of proper nouns in the text.
Cyberbullying instances may involve targeting individuals or organizations, and the presence of frequent proper nouns can be indicative of such behaviour.
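The exact formula for the proper noun score is not specified here; one plausible definition, assumed only for illustration, is the fraction of tokens tagged as proper nouns (NNP/NNPS):

```python
import nltk

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

def proper_noun_score(text: str) -> float:
    # Fraction of tokens tagged as proper nouns (assumed definition)
    tags = nltk.pos_tag(nltk.word_tokenize(text))
    if not tags:
        return 0.0
    nnp = sum(1 for _, tag in tags if tag in ("NNP", "NNPS"))
    return nnp / len(tags)

print(proper_noun_score("John from Delhi Public School is a loser"))
```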
These extracted features contribute to the overall feature set used in the automatic cyberbullying detection model. By feeding these features into the Bidirectional Coot Optimized Gated Recurrent Unit (BiCo-GRU) architecture, the model can exploit different linguistic and statistical aspects of the text to detect cyberbullying on Twitter.
Methodology
"An Automatic Cyberbullying Detection Model in the Twitter Social Media Platform Based on a Bidirectional Coot Optimized Gated Recurrent Unit":
Receive the reprocessed text data, typically represented as a sequence of word embeddings.
Bidirectional Coot Optimized Gated Recurrent Unit (BiCo-GRU) Layers:
The BiCo-GRU layer consists of two parallel Gated Recurrent Unit (GRU) layers, one processing the input sequence in the forward direction and the other in the backward direction.
GRU is a type of recurrent neural network (RNN) that effectively captures sequential dependencies and contextual information.
The coot optimization technique enhances the GRU's capability to capture long-term dependencies by fine-tuning the parameters of its update and reset gates.
The outputs of the two GRU layers are concatenated to capture both forward and backward contextual information.
Attention Mechanism:
Apply an attention mechanism to the concatenated output of the BiCo-GRU layers.
Attention allows the model to focus on important words or phrases within the input sequence, giving them higher weights during the classification process.
The attention mechanism assigns attention weights to each input token, which are then used to compute a weighted sum of the BiCo-GRU layer outputs.
Fully Connected Layers:
Process the weighted sum from the attention mechanism through one or more fully connected layers.
Fully connected layers consist of densely connected neurons that learn non-linear representations from the input data.
Typically, an activation function (such as a ReLU or sigmoid) follows each fully connected layer to introduce non-linearity.
Output Layer:
The final fully connected layer outputs a probability distribution over the target classes (cyberbullying vs. non-abusive content).
Usually, a softmax activation function is applied to normalize the output into probabilities.
Model Training and Optimization:
The model is trained using a suitable loss function, such as cross-entropy (written out after this list), which measures the dissimilarity between predicted and true class labels.
The parameters of the model are optimized using backpropagation and gradient descent algorithms.
Regularization techniques like dropout or batch normalization may be applied to prevent overfitting and improve generalization.
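For reference, the binary cross-entropy loss mentioned above, computed over N tweets with true labels y_i (0 or 1) and predicted probabilities ŷ_i, can be written as:

\[
\mathcal{L} = -\frac{1}{N}\sum_{i=1}^{N}\left[y_i \log \hat{y}_i + (1 - y_i)\log(1 - \hat{y}_i)\right]
\]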
The proposed model takes advantage of the power of the Bidirectional Coot Optimized Gated Recurrent Unit (BiCo-GRU) architecture to effectively capture sequential dependencies, context information, and long-term dependencies in text data. The attention mechanism further enhances the model's ability to focus on important words or phrases, while the fully connected layers and output layer enable the classification of cyberbullying instances. By training and optimizing this architecture on labelled data, the model can automatically detect cyberbullying on the Twitter social media platform.
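As an illustration of this architecture, the following Keras sketch stacks a bidirectional GRU, a simple additive attention step, and fully connected layers. All layer sizes are assumptions chosen for illustration, and the COA-based parameter tuning described earlier is assumed to happen outside this snippet.

```python
from tensorflow.keras import layers, models

VOCAB_SIZE, EMBED_DIM, MAX_LEN = 20000, 100, 50  # assumed sizes

inputs = layers.Input(shape=(MAX_LEN,))
x = layers.Embedding(VOCAB_SIZE, EMBED_DIM)(inputs)
# Bidirectional GRU: forward and backward passes concatenated at every time step
x = layers.Bidirectional(layers.GRU(64, return_sequences=True))(x)
# Simple additive attention: score each time step, normalize, take the weighted sum
scores = layers.Dense(1, activation="tanh")(x)
weights = layers.Softmax(axis=1)(scores)
context = layers.Flatten()(layers.Dot(axes=1)([weights, x]))
# Fully connected layers and a softmax output over the two classes
x = layers.Dense(64, activation="relu")(context)
x = layers.Dropout(0.3)(x)
outputs = layers.Dense(2, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```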
Proposed Methodology
Data Collection:
Obtain a representative dataset of tweets from the Twitter social media platform.
The dataset should include both cyberbullying instances and non-abusive content to facilitate training and evaluation of the model.
Data Pre-processing:
Clean the collected tweets by removing noise, such as special characters, URLs, and emojis.
Perform tokenization to split the tweets into individual words or tokens.
Apply techniques like stemming or lemmatization to normalize the words and reduce variations.
Feature Engineering:
Convert the pre-processed text into numerical representations suitable for machine learning algorithms.
Utilize word embeddings (e.g., Word2Vec, GloVe) to capture semantic relationships between words (a minimal sketch follows this list).
Optionally, incorporate additional features such as user metadata, hashtags, or mentions to improve the model's performance.
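As a minimal sketch of the word-embedding step above, the snippet below tokenizes tweets and builds an embedding matrix from pre-trained GloVe vectors; the file path glove.6B.100d.txt, the vocabulary size, and the toy tweets are assumptions.

```python
import numpy as np
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

tweets = ["you are pathetic", "nice match yesterday"]  # toy examples
tokenizer = Tokenizer(num_words=20000, oov_token="<unk>")
tokenizer.fit_on_texts(tweets)
sequences = pad_sequences(tokenizer.texts_to_sequences(tweets), maxlen=50)

# Build an embedding matrix from pre-trained GloVe vectors (path is an assumption)
embedding_dim = 100
embedding_matrix = np.zeros((20000, embedding_dim))
with open("glove.6B.100d.txt", encoding="utf-8") as f:
    for line in f:
        word, *vec = line.split()
        idx = tokenizer.word_index.get(word)
        if idx is not None and idx < 20000:
            embedding_matrix[idx] = np.asarray(vec, dtype="float32")
# embedding_matrix can then initialize the Embedding layer's weights.
```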
Model Architecture:
Design and implement a Bidirectional Coot Optimized Gated Recurrent Unit (BiCo-GRU) model for cyberbullying detection.
Configure the model with appropriate parameters, including the number of layers, hidden units, and dropout regularization.
Initialize the weights of the model using pre-trained word embeddings to leverage transfer learning.
Training:
Split the pre-processed dataset into training, validation, and testing sets.
Feed the training data into the BiCo-GRU model and optimize the model's parameters using backpropagation and gradient descent.
Monitor the model's performance on the validation set and apply early stopping to prevent overfitting.
Experiment with different optimization algorithms (e.g., Adam, RMSprop) and learning rates to find the best configuration.
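A minimal sketch of this training step with Keras, assuming the `model` from the earlier architecture sketch and pre-split arrays X_train, y_train, X_val, and y_val already exist:

```python
from tensorflow.keras.callbacks import EarlyStopping

early_stop = EarlyStopping(monitor="val_loss", patience=3, restore_best_weights=True)

model.compile(optimizer="adam",  # Adam or RMSprop could be compared here
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

history = model.fit(X_train, y_train,
                    validation_data=(X_val, y_val),
                    epochs=30, batch_size=64,
                    callbacks=[early_stop])
```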
Model Evaluation:
Evaluate the trained model on the testing set to assess its performance in cyberbullying detection.
Calculate metrics such as accuracy, precision, recall, and F1 score to measure the model's effectiveness.
Analyse the model's performance across different classes (cyberbullying vs. non-abusive content) to understand any class imbalances or biases.
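A minimal sketch of this evaluation step with scikit-learn, assuming a trained `model` and a held-out test set X_test, y_test with binary labels:

```python
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, classification_report)

y_prob = model.predict(X_test)      # (n_samples, 2) class probabilities
y_pred = np.argmax(y_prob, axis=1)  # predicted class labels

print("Accuracy :", accuracy_score(y_test, y_pred))
print("Precision:", precision_score(y_test, y_pred))
print("Recall   :", recall_score(y_test, y_pred))
print("F1 score :", f1_score(y_test, y_pred))
# Per-class breakdown helps reveal class imbalance or bias
print(classification_report(y_test, y_pred,
                            target_names=["non-abusive", "cyberbullying"]))
```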
Model Fine-tuning and Optimization:
Fine-tune the model based on the evaluation results and iterate through the model architecture, training, and evaluation steps until satisfactory performance is achieved.
Experiment with different hyperparameters, such as batch size, sequence length, or regularization techniques, to optimize the model further.
Deployment and Application:
Integrate the trained cyberbullying detection model into the Twitter social media platform or develop a standalone application for real-world usage.
Evaluate the model's performance in real-time scenarios and assess its effectiveness in identifying and flagging instances of cyberbullying.
Monitor the model's performance over time and collect user feedback to make continuous improvements and updates.
By following this methodology, researchers can develop an effective automatic cyberbullying detection model based on the Bidirectional Coot Optimized Gated Recurrent Unit (BiCo-GRU) for the Twitter social media platform.
Discussion
The phenomenon of cyberbullying has emerged as a prevalent concern on popular social media platforms such as Twitter, leading to detrimental consequences for individuals and adverse effects on the reputation of businesses. In response to this challenge, researchers have been actively investigating automated detection models that can promptly identify and address instances of cyberbullying in real time. The Bidirectional Coot Optimized Gated Recurrent Unit (BiCo-GRU) model is a promising approach that has demonstrated considerable potential in the field of cyberbullying detection.
The BiCo-GRU model augments the conventional GRU with the coot optimization algorithm (COA), a population-based metaheuristic used here to fine-tune the network's parameters. The integration of bidirectional gated recurrent units (GRU) and coot optimization enhances the model's capacity to capture temporal dependencies and contextual information within Twitter conversations. Consequently, this integration leads to improved accuracy in the detection of instances of cyberbullying [25].
In the realm of business applications, the utilization of the BiCo-GRU-based automatic cyberbullying detection model presents numerous advantageous features. Primarily, it enables businesses to uphold a secure and courteous digital milieu for their clientele, workforce, and affiliates. The expeditious identification and resolution of cyberbullying incidents by businesses can effectively mitigate the risk of reputational harm and adverse publicity stemming from such occurrences.
Additionally, the model's ability to detect instances of cyberbullying in real-time facilitates timely responses, thereby enhancing the efficiency and effectiveness of crisis management strategies. Business entities possess the ability to promptly intervene and implement suitable measures, such as the blocking of malevolent accounts or the reporting of abusive content, thereby mitigating the dissemination and consequences of cyberbullying.
Moreover, the implementation of automated cyberbullying detection systems can effectively alleviate the burden on human moderators, who may face an overwhelming amount of content on social media platforms. The implementation of the BiCo-GRU-based detection model enables businesses to optimize their moderation procedures, thereby enabling human moderators to allocate their attention towards tasks that require higher levels of complexity and nuance [26].
Nevertheless, despite its inherent potential, the BiCooGRU model is not exempt from encountering various challenges. The efficacy of the model is significantly contingent upon the calibre and variety of the training data. If biases within the data are not appropriately acknowledged and mitigated, they have the potential to result in inaccurate positive or negative outcomes, thereby compromising the effectiveness of the model. Furthermore, it is imperative to regularly update the model in order to adapt to the ever-changing tactics employed in cyberbullying, thereby guaranteeing its sustained efficacy [27].
In summary, the implementation of a Bidirectional Coot Optimized Gated Recurrent Unit-based automatic cyberbullying detection model exhibits significant potential for enterprises operating on social media platforms such as Twitter. Through proactive identification and mitigation of cyberbullying incidents, businesses have the ability to safeguard their brand reputation, uphold a secure online environment, and effectively handle crises. Nevertheless, it is imperative to consistently make endeavours in order to enhance the performance of the model, address biases, and ensure its alignment with the latest developments in the realm of cyberbullying [28].
Conclusion
This paper presents an innovative approach to combating cyberbullying on the Twitter social media platform through the development of an automatic detection model. By leveraging the power of the Bidirectional Coot Optimized Gated Recurrent Unit (BiCo-GRU), the proposed model shows promise in accurately identifying cyberbullying cases. The incorporation of bidirectional modelling and coot optimization enhances the model's ability to capture contextual information and effectively distinguish between abusive and non-abusive content. The findings of this study contribute to the growing body of research on cyberbullying detection, providing valuable insights and a practical solution to mitigate the harmful effects of online harassment. With further refinement and implementation, this model holds the potential to enhance user safety and well-being in social media environments, fostering a more inclusive and respectful online community.
Future scope
Overall, the innovative approach presented in this paper provides a strong foundation for future research endeavours aimed at combating cyberbullying on social media platforms. By addressing the areas of improvement outlined below and exploring new avenues, researchers can contribute to the development of more sophisticated and effective models, interventions, and strategies, ultimately fostering safer and more respectful online communities.
Enhancing Model Performance: Although the proposed model, based on Bidirectional Coot Optimized Gated Recurrent Unit (BiCo-GRU), demonstrates promising results, there is room for improvement. Future research can focus on optimizing the model architecture, exploring alternative deep learning techniques, or incorporating additional features or embeddings to further enhance the accuracy and robustness of cyberbullying detection.
Multilingual Support: As social media platforms are used globally, extending the model's capabilities to detect cyberbullying in multiple languages is crucial. Further research can be conducted to develop language-agnostic or multilingual models that can effectively identify abusive content across different languages, enabling a more comprehensive approach to tackling cyberbullying on Twitter and other social media platforms.
Conflict Of Interest
The author declares that they have no conflict of interest.
Acknowledgement
The author is thankful to the institutional authority for supporting the completion of this work.
References
- Chun J, Lee J, Kim J, Lee S. An international systematic review of cyberbullying measurements. Computers in human behavior. 2020 Dec 1;113:106485. https://doi.org/10.1016/j.chb.2020.106485
- Zych I, Baldry AC, Farrington DP, Llorent VJ. Are children involved in cyberbullying low on empathy? A systematic review and meta-analysis of research on empathy versus different cyberbullying roles. Aggression and violent behavior. 2019 Mar 1;45:83-97. https://doi.org/10.1016/j.avb.2018.03.004
- Gaffney H, Farrington DP, Espelage DL, Ttofi MM. Are cyberbullying intervention and prevention programs effective? A systematic and meta-analytical review. Aggression and violent behavior. 2019 Mar 1;45:134-53. https://doi.org/10.1016/j.avb.2018.07.002
- Craig W, Boniel-Nissim M, King N, Walsh SD, Boer M, Donnelly PD, Harel-Fisch Y, Malinowska-Cieślik M, de Matos MG, Cosma A, Van den Eijnden R. Social media use and cyber-bullying: A cross-national analysis of young people in 42 countries. Journal of Adolescent Health. 2020 Jun 1;66(6):S100-8. https://doi.org/10.1016/j.jadohealth.2020.03.006
- Jadambaa A, Thomas HJ, Scott JG, Graves N, Brain D, Pacella R. Prevalence of traditional bullying and cyberbullying among children and adolescents in Australia: A systematic review and meta-analysis. Australian & New Zealand Journal of Psychiatry. 2019 Sep;53(9):878-88. https://doi.org/10.1177/0004867419846393
- Ortega-Barón J, Buelga S, Ayllón E, Martínez-Ferrer B, Cava MJ. Effects of intervention program Prev@cib on traditional bullying and cyberbullying. International journal of environmental research and public health. 2019 Feb;16(4):527. https://doi.org/10.3390/ijerph16040527
- Kowalski RM, Limber SP, McCord A. A developmental approach to cyberbullying: Prevalence and protective factors. Aggression and violent behavior. 2019 Mar 1;45:20-32. https://doi.org/10.1016/j.avb.2018.02.009
- Hinduja S, Patchin JW. Connecting adolescent suicide to the severity of bullying and cyberbullying. Journal of school violence. 2019 Jul 3;18(3):333-46. https://doi.org/10.1080/15388220.2018.1492417
- Balakrishnan V, Khan S, Arabnia HR. Improving cyberbullying detection using Twitter users’ psychological features and machine learning. Computers & Security. 2020 Mar 1;90:101710. https://doi.org/10.1016/j.cose.2019.101710
- Abaido GM. Cyberbullying on social media platforms among university students in the United Arab Emirates. International journal of adolescence and youth. 2020 Dec 31;25(1):407-20. https://doi.org/10.1080/02673843.2019.1669059
- Van Ouytsel J, Lu Y, Ponnet K, Walrave M, Temple JR. Longitudinal associations between sexting, cyberbullying, and bullying among adolescents: Cross-lagged panel analysis. Journal of adolescence. 2019 Jun 1;73:36-41. https://doi.org/10.1016/j.adolescence.2019.03.008
- Zych I, Farrington DP, Ttofi MM. Protective factors against bullying and cyberbullying: A systematic review of meta-analyses. Aggression and violent behavior. 2019 Mar 1;45:4-19. https://doi.org/10.1016/j.avb.2018.06.008
- Charalampous K, Demetriou C, Tricha L, Ioannou M, Georgiou S, Nikiforou M, Stavrinides P. The effect of parental style on bullying and cyber bullying behaviors and the mediating role of peer attachment relationships: A longitudinal study. Journal of adolescence. 2018 Apr 1;64:109-23. https://doi.org/10.1016/j.adolescence.2018.02.003
- Zych I, Beltrán-Catalán M, Ortega-Ruiz R, Llorent VJ. Social and emotional competencies in adolescents involved in different bullying and cyberbullying roles. Revista de Psicodidáctica (English ed.). 2018 Jul 1;23(2):86-93. https://doi.org/10.1016/j.psicoe.2017.12.001
- Ding Y, Li D, Li X, Xiao J, Zhang H, Wang Y. Profiles of adolescent traditional and cyber bullying and victimization: The role of demographic, individual, family, school, and peer factors. Computers in Human Behavior. 2020 Oct 1;111:106439. https://doi.org/10.1016/j.chb.2020.106439
- Wang MJ, Yogeeswaran K, Andrews NP, Hawi DR, Sibley CG. How common is cyberbullying among adults? Exploring gender, ethnic, and age differences in the prevalence of cyberbullying. Cyberpsychology, Behavior, and Social Networking. 2019 Nov 1;22(11):736-41. https://doi.org/10.1089/cyber.2019.0146
- Lozano-Blasco R, Cortés-Pascual A, Latorre-Martínez MP. Being a cybervictim and a cyberbully–The duality of cyberbullying: A meta-analysis. Computers in human behavior. 2020 Oct 1;111:106444. https://doi.org/10.1016/j.chb.2020.106444
- Bork-Hüffer T, Mahlknecht B, Kaufmann K. (Cyber) Bullying in schools–when bullying stretches across cON/FFlating spaces. Children's Geographies. 2021 Mar 4;19(2):241-53. https://doi.org/10.1080/14733285.2020.1784850
- Garaigordobil M, Machimbarrena JM. Victimization and perpetration of bullying/cyberbullying: Connections with emotional and behavioral problems and childhood stress. Psychosocial Intervention. 2019;28(2):67-73. https://doi.org/10.5093/pi2019a3
- Murshed BA, Abawajy J, Mallappa S, Saif MA, Al-Ariki HD. DEA-RNN: A hybrid deep learning approach for cyberbullying detection in Twitter social media platform. IEEE Access. 2022 Feb 23;10:25857-71. https://doi.org/10.1109/access.2022.3153675
- Yuvaraj N, Chang V, Gobinathan B, Pinagapani A, Kannan S, Dhiman G, Rajan AR. Automatic detection of cyberbullying using multi-feature based artificial intelligence with deep decision tree classification. Computers & Electrical Engineering. 2021 Jun 1;92:107186. https://doi.org/10.1016/j.compeleceng.2021.107186
- Purnamasari NM, Fauzi MA, Indriati LS, Dewi LS. Cyberbullying identification in twitter using support vector machine and information gain based feature selection. Indonesian Journal of Electrical Engineering and Computer Science. 2020 Jun;18(3):1494-500. https://doi.org/10.11591/ijeecs.v18.i3.pp1494-1500
- Kumar A, Sachdeva N. Multimodal cyberbullying detection using capsule network with dynamic routing and deep convolutional neural network. Multimedia Systems. 2021 Feb 2:1-0. https://doi.org/10.1007/s00530-020-00747-5
- Yuvaraj N, Srihari K, Dhiman G, Somasundaram K, Sharma A, Rajeskannan SM, Soni M, Gaba GS, AlZain MA, Masud M. Nature-inspired-based approach for automated cyberbullying classification on multimedia social networking. Mathematical Problems in Engineering. 2021 Feb 22;2021:1-2. https://doi.org/10.1155/2021/6644652
- Chen W, Cheng L, Chang Z, Wen B, Li P. Wind turbine blade icing detection using a novel bidirectional gated recurrent unit with temporal pattern attention and improved coot optimization algorithm. Measurement Science and Technology. 2022 Oct 20;34(1):014004. https://doi.org/10.1088/1361-6501/ac8db1
- Jaiswal M, Basha AM. The Influence of Social Media Platform on Purchase Intention and Consumer Decision-Making: Post Covid-19. Recent Advancements in commerce and management. 2022 Aug 30:222. https://doi.org/10.5281/zenodo.710951
- Ganca SS, Kyobe M. The Effectiveness of School Anti-cyberbullying Policies and Their Compliance with South African Laws: A Conceptual Framework. In: International Development Informatics Association Conference 2022 Nov 23 (pp. 234-248). Cham: Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-28472-4_15
- Beltrán Catalán M, Zych I, Ortega Ruiz R, Llorent García VJ. Victimisation through bullying and cyberbullying: Emotional intelligence, severity of victimisation and technology use in different types of victims. Psicothema. 2018. https://doi.org/10.7334/psicothema2017.313