
7 - AI for Human and Misinformation Interactions

A Case of Social Media

Published online by Cambridge University Press: 19 September 2025

Dan Wu
Affiliation:
Wuhan University, China
Shaobo Liang
Affiliation:
Wuhan University, China

Summary

Misinformation on social media is a recognized threat to societies. Research has shown that social media users play an important role in the spread of misinformation. It is crucial to understand how misinformation affects user online interaction behavior and the factors that contribute to it. In this study, we employ an AI deep learning model to analyze emotions in user online social media conversations about misinformation during the COVID-19 pandemic. We further apply the Stimuli–Organism–Response framework to examine the relationship between the presence of misinformation, emotions, and social bonding behavior. Our findings highlight the usefulness of AI deep learning models to analyze emotions in social media posts and enhance the understanding of online social bonding behavior around health-related misinformation.

Information

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2025


7.1 Introduction

The widespread use of the internet and social networks has given social media users a platform to create social bonds irrespective of their physical location. It also acts as a breeding ground for the propagation of misinformation online. Research shows that social media users play a central role in spreading this misinformation by falsely believing it (Vosoughi et al., 2018). Because of social distancing and related restrictions, many social media users relied on social networking to obtain information during the early stages of COVID-19. Limited knowledge about the new virus also helped related misinformation spread over the internet, especially while scientific studies were still scarce. Misinformation caused harm to the general public during the pandemic. For example, Forati and Ghose (2021) revealed that locations where high volumes of misinformation were posted later experienced a rise in case numbers. Studies have also found that those who believed conspiracies about the virus were more likely to reject expert, authoritative information (Uscinski et al., 2020) and showed greater hesitancy toward the vaccine (Freeman et al., 2022). Allington et al. (2021) likewise revealed a negative relationship between COVID-19 conspiracy beliefs and health-protective behaviours, and suggested that such beliefs may have been fostered by using social media as a source of information. Understanding human behaviour around health-related misinformation is therefore crucial to combating it online.

Substantial computational work has concentrated on misinformation identification and mitigation to help control the spread of misinformation online. Despite evidence that social media users play an important role in spreading misinformation (Vosoughi et al., 2018), few studies have examined how users interact with misinformation socially and what factors drive these social behaviours. By expressing opinions, social media users find like-minded individuals online and create social bonds in a virtual environment. Building on the appraisal framework (Martin & White, 2003), studies such as Inwood and Zappavigna (2021) have modeled social bonding behaviour in virtual settings using the ambient affiliation social semiotic approach, which leverages fine-grained linguistic patterns in writing. Understanding social bonding behaviour is crucial for identifying the root cause of the spread of misinformation, since users drive that spread by falsely believing what they read. Zhou et al. (2022) investigated misinformation-spreading behaviour from the perspective of users' intent and found that users may share misinformation on social networking sites unintentionally. Incorporating such signals into their model design improved the performance of their misinformation identification model, indicating that human information interaction signals from a social perspective can enhance model design and performance. Leveraging social bonding behaviour may therefore aid future research in developing more effective systems to combat the spread of misinformation. This chapter investigates two research questions:

(1) How do social media users bond around misinformation?

(2) What is the relationship between emotions and social bonding behaviour?

We adopt the Stimuli–Organism–Response (SOR) framework (Mehrabian & Russell, 1974) to explore online social bonding behaviour around misinformation through the lens of emotions. We view information exposure as the stimulus, social media users' emotions as the organism, and social bonding behaviour as the response. Using a public COVID-19 dataset, CoAID (Cui & Lee, 2020), drawn from X (formerly known as Twitter), we explicitly explored online social bonding behaviour around COVID-19 misinformation. We analyzed conversations from X to characterize social connections and attitudes toward misinformation. We randomly sampled 330 X conversations from CoAID and manually annotated the social bonding behaviour expressed in the reply posts. DeepMoji (Felbo et al., 2017) and the Multidimensional Lexicon of Emojis (MLE) (Godard & Holtzman, 2022) were applied to extract emotion intensities from the reply posts; Section 7.3.4 covers the extraction in detail. Once emotion intensities were computed, we investigated how they differed across levels of information veracity, and then quantified how these emotions contribute to users' attitudes toward the information in the source post. An individual's attitude intensity determines how a social bond is formed and leads to different online social bonding behaviours.

Similar to Vosoughi et al. (2018), who found that emotions and their intensities differ across levels of information veracity, we found that social bonding behaviours differ between true information and misinformation. Further analysis showed that, for misinformation, the emotion of surprise contributes significantly to supporting the misinformation, while anticipation contributes significantly to rejecting it. In contrast, when users are exposed to true information, anger and sadness contribute significantly to supporting it, whereas fear contributes significantly to rejecting it.

7.2 Literature Review

7.2.1 Misinformation and Human–Information Interaction

Studies such as Soroya et al. (2021) and Ke et al. (2021) examined how social media users interacted with information during the COVID-19 event, tracing how information influences user behaviour from information seeking to information avoidance. Humans are prone to seeking knowledge when faced with uncertainty, but this can also leave individuals feeling overwhelmed by information (information overload), which leads them to avoid it. Significant effort has also gone into studying human–information interaction around misinformation. Vosoughi et al. (2018) discuss the spread of true and false information online from various perspectives. They discovered that false information spreads faster, deeper, and more widely on online social networks, and that misinformation is more novel than the truth; this novelty may be why it receives more attention from users and is shared more often. Weeks (2015) evaluated misperceptions of misinformation and how various elements affect users' judgments of it, finding that emotion, partisanship, and correction play important roles in judgment accuracy. However, that study limited emotions to anger and anxiety, because earlier work had found that these negative emotions contribute to human misperceptions.

7.2.2 Emotions and User Behaviours

Human behaviours are affected by emotions, consciously or unconsciously, and numerous studies have examined the effects of emotions on various user behaviours. For example, Dunn and Schweitzer (2005) examined different emotional states and how they affect an individual's trust. They found that positive emotions such as happiness increase the likelihood of an individual trusting an event, whereas negative emotions such as anger have the opposite effect. Moving to an online setting, Wang et al. (2009) found that positive emotions have a greater impact than negative emotions on perceived belief in internet applications.

It is widely acknowledged that including emotion as a feature in user behaviour prediction models is beneficial. Many studies have found that models incorporating emotion information achieve higher accuracy (Calvo & D'Mello, 2010; Pak & Paroubek, 2010), demonstrating that emotion is a powerful predictor of user behaviour. For instance, Chen et al. (2017) showed that their model benefits from incorporating an emotion signal when modeling users' reposting behaviour. User emotions have also been adopted in false-information modeling to improve performance (Li et al., 2016).

7.2.3 User Behaviours based on the SOR Framework

The Stimulus–Organism–Response (SOR) framework (Mehrabian & Russell, 1974) in psychology aids in understanding human behaviour and its underlying causes, and is therefore frequently used when a study aims to identify the main factors influencing a particular behaviour. It has been used heavily in studying customer behaviour (Arora et al., 2020; Gatautis et al., 2016). For example, using the SOR framework, Sherman et al. (1997) explained how the store environment affects customers' emotions and how that leads to different purchasing behaviours, whereas Slama and Tashchian (1987) conducted a case study on how consumer involvement affects shampoo purchasing behaviour as the response.

Although traditionally used to study consumer behaviour, in recent years the framework has also been applied in information studies to unpack the information behaviour of social media users. Soroya et al. (2021) studied how users move from information seeking to information avoidance online. Using the SOR framework, they found that information seeking acts as the stimulus that drives users into information overload; the affected users' internal state of information anxiety serves as the organism, and information avoidance emerges as the response. Xiao and Su (2022) revealed a relationship in which users incidentally exposed to misinformation developed different misperceptions as the organism and ended up sharing the misinformation as a result.

The SOR framework is also widely used to investigate COVID-19-related health misinformation. For instance, Li et al. (2022) used it to explain how COVID-19 risk, as a stimulus, affects individuals' emotions as the organism and leads to different responses in sharing COVID-19-related misinformation online. Wu (2022) used the framework to demonstrate how an individual's social and information dependency produces positive and negative effects on them, finding that the positive effect has the greater influence on users' sharing of misinformation on social media platforms. Similarly, Sampat and Raj (2022) demonstrated, based on the COVID-19 event, how various personal characteristics influence information-sharing behaviour on social media.

7.3 Methodology and Data

7.3.1 Dialogic Social Affiliation Text

Limited attention has been paid to how values are communicated and how bonds are formed on social media from a social behaviour perspective. Introduced by Knight (2008, 2010, 2013) and summarized in the ambient affiliation framework (Inwood & Zappavigna, 2021), the dialogic affiliation system is designed to evaluate how a recipient negotiates the bond introduced in the initial communication. The system first identifies the bond from a linguistic standpoint using ideation and attitude pairs and then observes how bonds are communicated in social networks. To identify the attitude, the system leverages the appraisal framework (Martin & White, 2003), a social semiotic framework for understanding an individual's attitude toward and appreciation of others through written messages. The appraisal framework contains three dimensions of evaluation: attitude, graduation, and engagement. Since our focus is social bonding behaviour rather than the bond itself, upon identifying bonds in the source post we identify only the attitude from the appraisal framework. The attitude dimension includes affect (emotions felt, e.g., happy, sad, interested), judgment (assessing an individual's behaviour, e.g., right, wrong, irresponsible), and appreciation (valuing an object, e.g., yummy, nice, fantastic). Applying this framework to social media conversations helps identify the underlying bonds in the posts. Using the dialogic affiliation system, which targets social bond negotiation at the level of an online conversation, we can also understand how an individual bonds with the information. Figure 7.1 provides a graphic representation of how the dialogic affiliation system functions. Under this framework, a user either manages or ignores the bond indicated in the source post, dividing bonding behaviour into a first and a second level. If the user manages the bond, the behaviour can be further divided into accepting or rejecting the proposed bond. Finally, these behaviours can be classified into four groups based on how users negotiate social bonds in an online setting. We apply the framework in our study to analyze social media users' behaviour and how bonds are formed around misinformation.

Figure 7.1 Dialogic affiliation system. Managing a bond leads to either support or reject. Support divides into warrant (strongly agree) and defer (somewhat agree, plus alternative ideations). Reject divides into oppose (somewhat disagree, plus alternative ideations) and dismiss (strongly disagree). Ignore corresponds to a neutral stance, with or without alternative ideations.

According to social bond theory (Pratt et al., 2011), one of the factors determining how strongly one bonds with the environment or with other individuals is belief, that is, an individual's values and attitude. In an online environment, this refers to whether the information aligns with a user's values and whether they agree or disagree with it; from there, users decide whether to bond socially with the information. In other words, how an individual bonds can be determined by how much the information aligns with their values, as reflected by the attitude expressed in their writing, which also indicates whether they agree with the information presented. In our study, we use a direct mapping between social bonding behaviour and attitude intensity, shown in Table 7.1, which reflects through social behaviour whether users agree with the provided information.

Table 7.1 Mapping between online social bonding behaviours and attitude intensity

Social bonding behaviour   Attitude intensity    Value
Support warrant            Strongly agree         2
Support defer              Somewhat agree         1
Ignore                     Neutral                0
Reject oppose              Somewhat disagree     −1
Reject dismiss             Strongly disagree     −2
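
For concreteness, the mapping in Table 7.1 can be expressed as a small lookup table. This is a minimal sketch; the label strings and function name are our own, not from the chapter.

```python
# Hypothetical encoding of Table 7.1: bonding behaviour -> attitude intensity.
ATTITUDE_INTENSITY = {
    "support_warrant": 2,    # strongly agree
    "support_defer": 1,      # somewhat agree
    "ignore": 0,             # neutral
    "reject_oppose": -1,     # somewhat disagree
    "reject_dismiss": -2,    # strongly disagree
}

def attitude_score(bonding_label: str) -> int:
    """Map an annotated social bonding behaviour to its numeric attitude intensity."""
    return ATTITUDE_INTENSITY[bonding_label]
```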

7.3.2 SOR Model for Information Veracity and Social Bonding Behaviors

The Stimulus–Organism–Response (SOR) framework is frequently used to understand and explain human behaviours in response to different environmental cues and how those behaviours are triggered by external factors (Arora et al., 2020; Gatautis et al., 2016; Sherman et al., 1997; Slama & Tashchian, 1987). It is also widely used to explain how individuals react to false information and how they used information during the recent COVID-19 pandemic (Li et al., 2022; Sampat & Raj, 2022; Soroya et al., 2021; Wu, 2022; Xiao & Su, 2022). It has proven to be a useful tool for describing outcomes caused by individuals' internal states. Because the SOR framework has supported and directed numerous studies of human behaviour around information and health-related misinformation, we apply it in our study to understand what social bonding behaviours are present and their underlying causes.

Earlier research shows that misinformation significantly affects users' emotions (Vosoughi et al., 2018; Wang et al., 2019). Information veracity therefore acts as the stimulus in our study: it affects users' internal states by triggering emotions of different intensities. Applying the model, the captured emotion intensities are the organism users reveal when facing information of different veracity. Existing literature shows that emotions influence associated behaviours (Dunn & Schweitzer, 2005; Wang et al., 2009), so users' online social bonding behaviours should be affected by their emotions. In summary, applying the SOR framework to our study (Figure 7.2), the external stimulus is the veracity of the source information, and the emotion users experience is the organism, as it mirrors internal human states. We used the chi-square test of independence to determine whether there is a relationship between information veracity and how users negotiate social bonds online. The null hypothesis is that no such relationship exists; the alternative hypothesis is that information veracity and bond negotiation are not independent, meaning that users negotiate bonds differently under different levels of veracity. This answers whether different stimuli yield different responses and lays the groundwork for further investigation.
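
A minimal sketch of this test using SciPy appears below. The contingency table is filled with the approximate counts read from Figure 7.5, not the study's exact data, so the resulting statistic will differ from the reported p-value of 0.003.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: information veracity (true, false). Columns: bonding types in the
# order support warrant, support defer, ignore, reject oppose, reject dismiss.
# Counts are approximations read from Figure 7.5, for illustration only.
observed = np.array([
    [42, 40, 35, 45, 40],   # true information
    [40, 15, 15, 20, 75],   # misinformation
])

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
# A p-value below 0.01 rejects independence of veracity and bonding type.
```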

Figure 7.2 The conceptual SOR model. The stimulus (information veracity: a source post with true information or a source post with false information) leads to the organism (emotions: anger, anticipation, joy, trust, fear, surprise, sadness, and disgust), which leads to the response (online social bonding types, where attitude intensity maps to support warrant, support defer, ignore, reject oppose, and reject dismiss).

7.3.3 Data

After reading the truth or misinformation, users may develop different affections and emotions toward the presented information. Users then react to these feelings when posting their opinions on various topics online. This behaviour can be understood as a user's attempt to negotiate social bonds in an online environment.

We used the COVID-19-related misinformation dataset CoAID (Cui & Lee, 2020) to study how users negotiate social bonds around COVID-19 misinformation. CoAID contains COVID-19 healthcare misinformation together with users' social engagement with it. COVID-19 has been one of the major global events of recent years, and a relevant dataset with social engagement spanning a significant number of online conversations allows us to investigate how users communicate bonds around misinformation.

The dataset contains over 8,000 conversation pairs on X, formed by replies to source posts related to misinformation, and over 127,000 pairs related to true information. It provides the source and reply post IDs, from which we could retrieve the conversations, along with labels for whether the information in the source post is true or false and whether the original post mentions a claim or a news item.

Source posts containing facts come mostly from official accounts posting announcements or information URLs. Two major types of accounts post misinformation: private accounts, and official accounts or users from the fact-checking community posting to combat misinformation by clarifying that the underlying message is false. The reply posts are generally from private accounts expressing opinions on the topic discussed in the source post. Examples of truthful and misinformation source posts and their replies appear in Figure 7.3. We obtained the source posts and replies through the official X API using the provided post IDs, filtered out non-English conversations, and randomly selected 175 conversations related to true information and 175 related to misinformation. Lastly, we annotated the selected subset of X conversations using the dialogic affiliation framework shown in Figure 7.1.
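
The filtering and sampling step could look roughly like the following sketch, assuming the hydrated conversations sit in a pandas DataFrame; the file name, column names, and the langdetect dependency are our assumptions, not part of CoAID.

```python
import pandas as pd
from langdetect import detect  # assumed language-detection dependency

def is_english(text: str) -> bool:
    """Best-effort language check; treat detection failures as non-English."""
    try:
        return detect(text) == "en"
    except Exception:
        return False

# Hypothetical frame of hydrated CoAID conversations with columns
# source_text, reply_text, and veracity ("true" or "false").
conversations = pd.read_csv("coaid_conversations.csv")
english = conversations[conversations["source_text"].map(is_english)
                        & conversations["reply_text"].map(is_english)]

# Randomly select 175 true and 175 false conversations, as in the study.
sample = (english.groupby("veracity", group_keys=False)
                 .apply(lambda g: g.sample(n=175, random_state=42)))
```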

Figure 7.3 Example source and reply posts. Part A shows a source post reporting that the N95 respirator masks healthcare workers need while treating coronavirus patients are in dangerously short supply, but that Duke University researchers have developed a method to clean them so they can be safely re-worn (posted 9:31 PM, March 28, 2020; 1,861 reposts, 162 quote posts, 5,090 likes), with a reply reading "Better than the nothing Trump is supporting." Part B shows a source post claiming that Belgium's health minister banned non-essential sexual activities of three or more persons in indoor areas, with a reply reading "While Boris just politely requests that Brits don't. When is he going to ban it?" followed by three face-with-tears-of-joy emojis.

The annotations were completed by three individuals: one author of this chapter and two independent researchers. Guidelines were provided to assist the annotators. They were instructed first to identify bonds in the source post using ideation and attitude pairs, leveraging the appraisal framework (Martin & White, 2003) as in Inwood and Zappavigna (2021). After bonds were identified, the annotators read the replies and used the dialogic affiliation framework to label how users negotiated the bonds. Table 7.2 provides annotation examples.

Table 7.2 Examples for annotation of social bonding

Misinformation

Source post: More funny math from @USER: (1) All non-US countries have done about 25 million tests; (2) US: about 5.9M (through Apr. 28); (3) Trump: "We've tested more than every country COMBINED"
Reply: More lies from our Lying King.
Social bonding: Support warrant

Source post: Royal Palace confirms Queen Elizabeth tests positive for coronavirus – UCR World News – Will oil prices fall again next week? The price $20 would be seen? I opened buy at $23.14 and didn't stop loss in time yesterday thanks to busy works [Face with Tears of Joy]
Reply: Let's see what news comes till tomorrow [Grinning Face with Smiling Eyes]
Social bonding: Support defer

Source post: CDC recommends men shave their beards to protect against coronavirus
Reply: Don't worry about Coronavirus, put a lime in it
Social bonding: Ignore

Source post: CDC recommends men shave their beards to protect against coronavirus
Reply: No way. I would suit up instead
Social bonding: Reject oppose

Source post: First volunteer in UK coronavirus vaccine trial has died
Reply: Fake news
Social bonding: Reject dismiss

Truthful information

Source post: The UK is supporting Somali government to set up hand washing stations to prevent the spread of Coronavirus. The first went operational in Lower Shabelle not far from Mogadishu yesterday with more to follow
Reply: Job well done
Social bonding: Support warrant

Source post: If the Government wants to help kids with their education during this crisis, instead of pushing to send them back to unsafe schools, he could start by ensuring all children have the resources – like books and computers – that they need to learn at home
Reply: Why doesn't the @USER donate their educational equipment lol [Face with Tears of Joy][Rolling on the Floor Laughing]
Social bonding: Support defer

Source post: We know what to do but clearly aren't all doing it: wash hands, social distance, mask when you can't, stay home if you are sick
Reply: @USER The test are rigged for positive results. Investigate
Social bonding: Ignore

Source post: Experts call on UK to not use contact tracing app for surveillance – Business Insider @URL
Reply: @USER This would be madness don't take it out on the public …
Social bonding: Reject oppose

Source post: "Worst nightmare": Fauci warns that coronavirus pandemic "isn't over yet"
Reply: @USER This guy changes his mind more than the weather!
Social bonding: Reject dismiss

The ideation and attitude pairs are in bold. @USER and @URL indicate a user mention and an external link, and [ ] represents an emoji.

Once the annotators finished, a majority vote was taken among the three sets of annotations, and any sample without a majority vote was discarded. Such disagreement may stem from conflicting interpretations of a post's message (Clark, 1985; Day & Gentner, 2007), which makes agreeing on or comprehending its contents challenging. After removing instances without a majority vote, 330 annotated conversations remained: 162 related to truths and 168 related to misinformation.
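
The majority-vote step itself is straightforward. A minimal sketch, assuming each reply carries three string labels from the dialogic affiliation scheme:

```python
from collections import Counter

def majority_label(labels):
    """Return the label chosen by at least two of three annotators, else None."""
    label, count = Counter(labels).most_common(1)[0]
    return label if count >= 2 else None

annotations = [
    ("support_warrant", "support_warrant", "support_defer"),  # kept: majority
    ("ignore", "reject_oppose", "support_defer"),             # dropped: no majority
]
kept = [lab for lab in map(majority_label, annotations) if lab is not None]
```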

7.3.4 Emotion Intensity

To understand how users feel after reading the information, we can evaluate the emotions extracted from their written text. By analyzing the emotion scores across various online social bonding types when exposed to truth and misinformation, we can learn what emotions might potentially affect a user’s reaction to bonds.

Prior studies measured emotions in text by counting words associated with Plutchik's eight basic emotions (Plutchik, 1980) using the NRC Word-Emotion Association Lexicon (Mohammad & Turney, 2013). One disadvantage of this method is that it is bound by vocabulary and relies on exact word matching to extract meaning; it also fails to account for negation producing contradictory emotions. Numerous prior studies have found that misinformation in general induces various negative emotions in users (Vosoughi et al., 2018), and false information about COVID-19 has likewise been shown to cause negative emotions (Leng et al., 2021). However, when we used the NRC lexicon (Mohammad & Turney, 2013) to assess emotions in our data, we found that it did not capture emotion intensity well in the chosen dataset.

The ability of emojis to convey emotions in written text, on the other hand, is well recognized. According to Gülşen (2016), emojis can be used to represent emotions. In contrast to words, emojis, as symbols representing faces, can indicate one or many emotions (Jaeger & Ares, 2017), and Ai et al. (2017) show that they carry greater semantic meaning than regular text. Emojis have also been used in psychometric scales to gauge emotions and personality in several psychology studies, such as Marengo et al. (2017) and Phan et al. (2019), with promising results.

Therefore, to overcome the disadvantages of exact word matching, we first project each text message into an emoji space by applying DeepMoji (Felbo et al., 2017), a neural model that maps text to five distinct emojis that represent the underlying emotions more comprehensively. To look up the emotional intensity of each emoji representation, we use the Multidimensional Lexicon of Emojis (MLE) (Godard & Holtzman, 2022). MLE provides emotion intensity scores for 359 emojis, calculated from over 3 million X posts and emotion ratings provided by 2,230 individual raters; it allows direct mapping of emoji content to the intensity of Plutchik's eight emotions (Plutchik, 1980). After projecting text into the emoji space, we use MLE to map the emojis to their corresponding emotion scores and, for each of the eight emotions, sum the scores across the five predicted emojis. The full process of mapping emotions and obtaining the final scores is presented in Figure 7.4.
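
Sketched in Python, the pipeline might look as follows. The `predict_top5_emojis` function stands in for a DeepMoji model wrapper, and `mle` for the loaded MLE lexicon; both names and the exact data layout are our assumptions.

```python
import numpy as np

EMOTIONS = ["anger", "anticipation", "disgust", "fear",
            "joy", "sadness", "surprise", "trust"]

def emotion_intensity(text, predict_top5_emojis, mle):
    """Project text into emoji space, then sum MLE scores per emotion.

    predict_top5_emojis: hypothetical wrapper around a DeepMoji model that
    returns the five most probable emojis for the text.
    mle: dict mapping each of the 359 MLE emojis to its eight Plutchik scores.
    """
    scores = np.zeros(len(EMOTIONS))
    for emoji in predict_top5_emojis(text):
        entry = mle.get(emoji)
        if entry is not None:  # skip emojis outside the 359-emoji lexicon
            scores += np.array([entry[e] for e in EMOTIONS])
    return dict(zip(EMOTIONS, scores))
```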

Figure 7.4 The overall process of obtaining emotion intensity scores: posts are mapped to five emojis expressing different emotions, which the MLE then maps to intensity scores for anger, disgust, joy, surprise, anticipation, fear, sadness, and trust.

The Kolmogorov-Smirnov (K-S) test is used to analyze the distribution of emotions in relation to information veracity. Combined with the mean and standard deviation of the emotion scores, this confirms whether the intensity levels of the eight emotions differ when information veracity varies. The K-S test result answers whether the stimulus causes variation in the organism, that is, whether emotion intensities differ under different information veracity.
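
A sketch of the per-emotion two-sample K-S test with SciPy, assuming `true_scores` and `false_scores` are DataFrames of per-reply emotion intensities for replies to true and false source posts:

```python
from scipy.stats import ks_2samp

# EMOTIONS is the list defined in the earlier emotion-intensity sketch.
for emotion in EMOTIONS:
    d_stat, p_value = ks_2samp(true_scores[emotion], false_scores[emotion])
    print(f"{emotion:>12}: D = {d_stat:.4f}, p = {p_value:.4g}")
```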

To determine whether emotions have different impacts on social bonding types as information veracity varies, we model the relationship with ordinary least squares regression. Using a linear combination of the emotion intensity scores, we estimate users' likelihood of believing and creating social bonds with the provided information, and we quantify the potential effect of each emotion on that likelihood. The model output indicates which emotions contribute most to how users bond, depending on whether they believe the information.

As mentioned in Section 7.3.2, based on social bonding theory (Pratt et al., 2011), in which alignment with personal values determines how much a user is willing to bond, we mapped each online social bonding type to a numerical value indicating the user's attitude, as shown in Table 7.1. This makes a user's attitude, and hence their willingness to bond, a quantifiable variable. With attitude intensity quantified, we can model how the emotion intensity scores contribute to how a user bonds. This answers the last question of our conceptual SOR model: the relationship between the organism (emotion) and the response (user bonding types).
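
A sketch of the regression step with statsmodels, assuming a hypothetical DataFrame `df` holding one row per annotated reply with the eight emotion intensity columns, the attitude intensity from Table 7.1, and a veracity label:

```python
import statsmodels.api as sm

def fit_attitude_model(subset):
    """OLS of attitude intensity (-2..2) on the eight emotion intensities."""
    X = sm.add_constant(subset[EMOTIONS])   # emotion scores as predictors
    y = subset["attitude_intensity"]        # mapped values from Table 7.1
    return sm.OLS(y, X).fit()

# Separate models for true information and misinformation, as in Table 7.4.
model_true = fit_attitude_model(df[df["veracity"] == "true"])
model_false = fit_attitude_model(df[df["veracity"] == "false"])
print(model_true.summary())
```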

7.4 Results

7.4.1 Social Bonding Type Distribution

Using our annotated data, we observe from Figure 7.5 that, when true information is presented, users tend to support the proposed bond by warranting it or by deferring to an alternative bond. In contrast, when misinformation is presented, users tend to reject the bond by dismissing it or by opposing it and suggesting an alternative. Dismissal is the most common type of online social bonding when misinformation is presented, while more users ignore the proposed bond when true information is presented. From this distribution, we believe the veracity of the information affects how users negotiate bonds in an online environment; the analyses in Sections 7.4.2–7.4.4 validate that this is the case.

Figure 7.5 Online social bonding behaviours distribution by information veracity. The bar graph plots counts (0 to 100) for false and true information across the online social bonding types. Approximate values (false, true): support warrant 40, 42; support defer 15, 40; reject dismiss 75, 40; reject oppose 20, 45; ignore 15, 35.

7.4.2 Association between Information Veracity and Negotiation of Social Bonds

As previously mentioned, we use the chi-square test of independence to determine whether the information's veracity and the type of online social bonding are independent, laying the foundation for further investigation into their connection. The null hypothesis assumes that online social bonding behaviours are independent of information veracity; we check whether the chi-square statistic exceeds the critical value. Given the test result (p-value = 0.003), at the 1 percent significance level we reject the null hypothesis and conclude that the veracity of information and the way users negotiate bonds are not independent. This is coherent with our observation in Section 7.4.1. Given that the two variables are not independent, we can further investigate, using the SOR framework (Mehrabian & Russell, 1974), the relationships between information veracity and how users choose to negotiate the bond.

7.4.3 Emotion Distribution between Different Information Veracity

To determine whether and how information veracity affects users' emotions, for both truth and misinformation we used all the data collected via the provided post IDs and performed the Kolmogorov-Smirnov (K-S) test on the emotion intensity scores derived from the replies; the results appear in Table 7.3. The test assesses whether the emotion intensity distributions of the two types are the same; where the distributions differ, we infer that the intensity of that emotion differs between the types, driven by the veracity of the information. To see which type has the higher intensity for each emotion, the mean and standard deviation of the intensity scores were also calculated. The intensity scores for all emotions differ significantly (p-value < 0.01). With true information, anticipation, joy, sadness, surprise, and trust are higher; with misinformation, anger and disgust are higher. These results are consistent with a prior study (Vosoughi et al., 2018), with a subtle difference. Broadly, true information triggers more positive emotions, while misinformation triggers more negative ones: users' emotions are affected by the veracity of information.

Table 7.3 Mean and standard deviation of emotions among different information veracities, all p-values < 0.001

Emotion        Mean (True)  Mean (False)  SD (True)  SD (False)  K-S D statistic
Anger          0.2773       0.2799        0.0940     0.0853      0.1027
Anticipation   0.4970       0.4679        0.0792     0.0716      0.1535
Disgust        0.2061       0.2109        0.0782     0.0712      0.1222
Fear           0.2954       0.2938        0.0846     0.0773      0.0752
Joy            0.4641       0.4235        0.1214     0.0997      0.1489
Sadness        0.2731       0.2721        0.0795     0.0727      0.0836
Surprise       0.2454       0.2297        0.0360     0.0335      0.1624
Trust          0.5547       0.5229        0.0799     0.0788      0.1760

7.4.4 Ordinary Least Squares Regression Model on Emotion and How Users Negotiate Bonds Using Social Media

Table 7.4 reports the results of an ordinary least squares regression predicting attitude intensity, that is, online social bonding type. The model is fitted separately to each type of information and analyzes which emotions affect social bonding behaviours and how. Using the regression, we represent how users prefer to connect with the information as their attitude intensity toward it, quantifying which emotions contribute to how users bond on social media under different information veracity.

Table 7.4 Ordinary least squares regression model on emotion and online social bonding behaviours

Emotion        True information     False information
Anger           87.15 (43.26)*       39.06 (50.48)
Anticipation   −40.69 (24.71)       −86.78 (28.14)**
Disgust        −69.51 (38.80)       −86.98 (51.38)
Fear          −113.39 (47.21)*       −6.55 (50.235)
Joy             13.19 (12.40)        16.96 (26.41)
Sadness         72.65 (29.50)*       33.49 (33.61)
Surprise        −0.81 (35.64)       105.14 (42.21)*
Trust           33.49 (21.615)       16.56 (24.46)

Notes: Unstandardized regression coefficients reported; standard errors in parentheses. p-values are two-tailed. * p < 0.05. ** p < 0.01.

From the results, we found that, when users are exposed to true information, anger, fear, and sadness contribute significantly to how they connect and bond with the information: anger and sadness contribute significantly to positive attitude intensity and to bonding with the true information, while fear contributes significantly to negative attitude intensity and to rejecting it. In contrast, when users are exposed to misinformation, anticipation and surprise are the significant factors: surprise contributes significantly to a more positive attitude intensity and to supporting the misinformation, while anticipation contributes significantly to rejecting it. Overall, which emotions matter for how users bond differs with information veracity.

The chi-square test result indicates that users initiate different social bonding behaviours for different information veracity, laying the groundwork for investigating the factors contributing to this outcome. It is consistent with what we saw in Figure 7.5, where users tend to believe the truth by supporting or deferring the proposed bond and to reject misinformation by dismissing it.

7.5 Discussion

We found that, when users are exposed to truth versus misinformation, the emotion intensities reflecting their internal states differ, consistent with Vosoughi et al. (2018). With true information, users show higher levels of anticipation, joy, sadness, and trust, while disgust is more intense when misinformation is presented; this also aligns with Vosoughi et al. (2018). However, our analysis reveals a subtle difference for surprise: its intensity is higher when true information is presented, whereas it was lower in Vosoughi et al. (2018). We suspect this is a case-specific observation, since our dataset was published during the early stages of COVID-19, when the general public still had limited knowledge of the topic; a higher intensity of surprise in response to true information is therefore reasonable. This is also supported by the novelty hypothesis of Vosoughi et al. (2018), whereby more novel information receives greater attention and surprise. Similarly for fear: since this was an unknown event, it is reasonable for users to feel fear even when true information is presented.

We found an association between emotions, users' attitude intensity, and social bonding behaviour when users are exposed to information of different veracity. When users are exposed to misinformation, anticipation and surprise are the key factors contributing significantly to social bonding behaviour. Anticipation contributes significantly to bond rejection, which is unsurprising: the more anticipatory a user is, the less likely they are to believe misinformation. Surprise, however, contributes significantly to supporting the misinformation. We suspect two reasons for this observation. First, surprise intensifies the feelings of other emotions (Mellers et al., 2013); second, the more emotional an individual is, the more likely they are to behave irrationally (Pham, 2007), in our case by falsely believing misinformation. In contrast, when users are exposed to true information, anger, fear, and sadness contribute significantly to how they bond socially. Anger and sadness play an important role in bonding around the truth: encountering true information can itself trigger anger and sadness, which aligns with our observations and with those reported by Vosoughi et al. (2018), so it is understandable that these emotions contribute strongly to attitude intensity. Similarly, it is understandable that fear, as an emotion, can lead individuals to refuse or reject bonding with the truth. Especially in the case of COVID-19, true information could carry messages about the seriousness of the virus; combined with the uncertainties surrounding it, fear of the unknown could cause individuals to withdraw or disassociate themselves from the information. We also suspect that the distribution of bonding types contributes to the subtle differences between the contributions of emotion intensity and bonding behaviour: as shown in Sections 7.4.1 and 7.4.2, bonding distributions differ under different veracity.

In an earlier study, Weeks (2015) examined information behaviour in a political context, considering two types of emotion alongside other factors that may influence individuals' judgments of political misinformation. The author found that anger motivates the evaluation of uncorrected misinformation, which can explain the positive relationship between anger and belief level in our study. Weeks (2015) also found that anxiety, the second emotion investigated, promotes initial belief, allowing individuals to accept presented information more easily. In our study, users exposed to misinformation with higher intensities of surprise are more easily convinced by, and more likely to believe, the misinformation. These are two distinct emotions with similar outcomes for misinformation; a common factor between anxiety and surprise is uncertainty about the future (Grupe & Nitschke, 2013), which we suspect is why the two emotions lead to the same conclusion. The other factors discussed in that study, such as political standpoint, are case-dependent and do not apply to ours. Lastly, it is unsurprising that users experiencing negative emotions such as fear prefer to withdraw rather than bond with the information.

We believe our study of individuals' social bonding behaviours from a social perspective can help in designing mitigation mechanisms to counteract the spread of misinformation. The emotional signals around misinformation can be leveraged in automatic misinformation mitigation systems. Since anger and sadness encourage bonding with true information while fear encourages rejecting it, messages promoting the truth might include more words related to anger and sadness and fewer related to fear, which pushes users away from creating social bonds with the truth. With misinformation, surprise should be minimized, since it leads users to falsely believe and bond with the misinformation, while anticipation words could be promoted to encourage users to disconnect from it.
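
As one illustration of feeding these signals into a downstream system, the emotion intensities could serve as features for a veracity classifier, in the spirit of Zhou et al. (2022). This is our sketch, reusing the hypothetical `df` and EMOTIONS from the earlier sketches, not a system described in this chapter:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X = df[EMOTIONS]                              # eight emotion intensity features
y = (df["veracity"] == "false").astype(int)   # 1 = misinformation

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Held-out accuracy with emotion features: {clf.score(X_test, y_test):.3f}")
```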

7.6 Conclusion

In this chapter, we applied the Stimuli–Organism–Response (SOR) framework to examine the relationships and effects among misinformation, emotions, and social bonding behaviour in an online environment. We found that different information veracity triggers different emotion intensities: exposure to misinformation triggered more anger and disgust, whereas exposure to true information triggered higher intensities of anticipation, joy, sadness, surprise, and trust. Furthermore, the key factors contributing to social behaviour differ with information veracity. Anger and sadness trigger social bonding reactions that support true information, while fear promotes its rejection. When users are exposed to misinformation, anticipation promotes rejection and surprise promotes behaviour that supports the misinformation. We believe these findings can help future research incorporate such signals to improve computational model design and performance in identifying and mitigating misinformation online, and to design models that capture misinformation from a more human-understandable perspective.

References

Ai, W., Lu, X., Liu, X., Wang, N., Huang, G., & Mei, Q. (2017). Untangling Emoji Popularity through Semantic Embeddings. Eleventh International AAAI Conference on Web and Social Media.
Allington, D., Duffy, B., Wessely, S., Dhavan, N., & Rubin, J. (2021). Health-protective Behaviour, Social Media Usage and Conspiracy Belief during the COVID-19 Public Health Emergency. Psychological Medicine, 51(10), 1763–1769.
Arora, S., Parida, R. R., & Sahney, S. (2020). Understanding Consumers' Showrooming Behaviour: A Stimulus–Organism–Response (SOR) Perspective. International Journal of Retail & Distribution Management, 48(11), 1157–1176.
Calvo, R. A., & D'Mello, S. (2010). Affect Detection: An Interdisciplinary Review of Models, Methods, and Their Applications. IEEE Transactions on Affective Computing, 1(1), 18–37.
Chen, J., Liu, Y., & Zou, M. (2017). User Emotion for Modeling Retweeting Behaviors. Neural Networks, 96, 11–21.
Clark, L. F. (1985). Social Knowledge and Inference Processing in Text Comprehension. Advances in Psychology, 29, 95–114.
Cui, L., & Lee, D. (2020). CoAID: COVID-19 Healthcare Misinformation Dataset. arXiv preprint arXiv:2006.00885.
Day, S. B., & Gentner, D. (2007). Nonintentional Analogical Inference in Text Comprehension. Memory & Cognition, 35(1), 39–49.
Dunn, J. R., & Schweitzer, M. E. (2005). Feeling and Believing: The Influence of Emotion on Trust. Journal of Personality and Social Psychology, 88(5), 736.
Felbo, B., Mislove, A., Søgaard, A., Rahwan, I., & Lehmann, S. (2017). Using Millions of Emoji Occurrences to Learn Any-domain Representations for Detecting Sentiment, Emotion and Sarcasm. Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (pp. 1615–1625).
Forati, A. M., & Ghose, R. (2021). Geospatial Analysis of Misinformation in COVID-19 Related Tweets. Applied Geography, 133, 102473.
Freeman, D., Waite, F., Rosebrock, L., Petit, A., Causier, C., East, A., Jenner, L., Teale, A.-L., Carr, L., Mulhall, S., Bold, E., & Lambe, S. (2022). Coronavirus Conspiracy Beliefs, Mistrust, and Compliance with Government Guidelines in England. Psychological Medicine, 52(2), 251–263.
Gatautis, R., Vitkauskaite, E., Gadeikiene, A., & Piligrimiene, Z. (2016). Gamification as a Mean of Driving Online Consumer Behaviour: SOR Model Perspective. Engineering Economics, 27(1), 90–97.
Godard, R., & Holtzman, S. (2022). The Multidimensional Lexicon of Emojis: A New Tool to Assess the Emotional Content of Emojis. Frontiers in Psychology, 13, 921388.
Grupe, D. W., & Nitschke, J. B. (2013). Uncertainty and Anticipation in Anxiety: An Integrated Neurobiological and Psychological Perspective. Nature Reviews Neuroscience, 14(7), 488–501.
Gülşen, T. T. (2016). You Tell Me in Emojis. In Ogata, T. & Akimoto, T. (eds.), Computational and Cognitive Approaches to Narratology (pp. 354–375). IGI Global.
Inwood, O., & Zappavigna, M. (2021). Ambient Affiliation, Misinformation and Moral Panic: Negotiating Social Bonds in a YouTube Internet Hoax. Discourse & Communication, 15(3), 281–307.
Jaeger, S. R., & Ares, G. (2017). Dominant Meanings of Facial Emoji: Insights from Chinese Consumers and Comparison with Meanings from Internet Resources. Food Quality and Preference, 62, 275–283.
Ke, Q., Du, J. T., & Ji, L. (2021). Towards a Conceptual Framework of Health Crisis Information Needs: An Analysis of COVID-19 Questions in a Social Q&A Website. Journal of Documentation, 77(4), 851–870.
Knight, N. K. (2008). "Still Cool … and American Too!": An SFL Analysis of Deferred Bonds in Internet Messaging Humour. Systemic Functional Linguistics in Use, Odense Working Papers in Language and Communication, 29, 481–502.
Knight, N. K. (2010). Wrinkling Complexity: Concepts of Identity and Affiliation in Humour. In Bednarek, M. & Martin, J. R. (eds.), New Discourse on Language: Functional Perspectives on Multimodality, Identity, and Affiliation (pp. 35–58). Continuum.
Knight, N. K. (2013). Evaluating Experience in Funny Ways: How Friends Bond through Conversational Humor. Text & Talk, 33(4–5), 553–574.
Leng, Y., Zhai, Y., Sun, S., Wu, Y., Selzer, J., Strover, S., Zhang, H., Chen, A., & Ding, Y. (2021). Misinformation during the COVID-19 Outbreak in China: Cultural, Social and Political Entanglements. IEEE Transactions on Big Data, 7(1), 69–80.
Li, M.-H., Chen, Z., & Rao, L.-L. (2022). Emotion, Analytic Thinking and Susceptibility to Misinformation during the COVID-19 Outbreak. Computers in Human Behavior, 133, 107295.
Li, Y., Feng, X., & Zhang, S. (2016). Detecting Fake Reviews Utilizing Semantic and Emotion Model. In 2016 3rd International Conference on Information Science and Control Engineering (ICISCE) (pp. 317–320). IEEE.
Marengo, D., Giannotta, F., & Settanni, M. (2017). Assessing Personality Using Emoji: An Exploratory Study. Personality and Individual Differences, 112, 74–78.
Martin, J. R., & White, P. R. (2003). The Language of Evaluation, Vol. 2. Springer.
Mehrabian, A., & Russell, J. A. (1974). An Approach to Environmental Psychology. MIT Press.
Mellers, B., Fincher, K., Drummond, C., & Bigony, M. (2013). Surprise: A Belief or an Emotion? Progress in Brain Research, 202, 3–19.
Mohammad, S. M., & Turney, P. D. (2013). Crowdsourcing a Word–Emotion Association Lexicon. Computational Intelligence, 29(3), 436–465.
Pak, A., & Paroubek, P. (2010). Twitter as a Corpus for Sentiment Analysis and Opinion Mining. Proceedings of the International Conference on Language Resources and Evaluation (LREC), May 17–23, Valletta, Malta.
Pham, M. T. (2007). Emotion and Rationality: A Critical Review and Interpretation of Empirical Evidence. Review of General Psychology, 11(2), 155–178.
Phan, W. M. J., Amrhein, R., Rounds, J., & Lewis, P. (2019). Contextualizing Interest Scales with Emojis: Implications for Measurement and Validity. Journal of Career Assessment, 27(1), 114–133.
Plutchik, R. (1980). A General Psychoevolutionary Theory of Emotion. In Plutchik, R. & Kellerman, H. (eds.), Theories of Emotion (pp. 3–33). Academic Press.
Pratt, T. C., Gau, J. M., & Franklin, T. W. (2011). Key Idea: Hirschi's Social Bond/Social Control Theory. In Key Ideas in Criminology and Criminal Justice (pp. 55–69). Sage Publications.
Sampat, B., & Raj, S. (2022). Fake or Real News? Understanding the Gratifications and Personality Traits of Individuals Sharing Fake News on Social Media Platforms. Aslib Journal of Information Management, 74(5), 840–876.
Sherman, E., Mathur, A., & Smith, R. B. (1997). Store Environment and Consumer Purchase Behavior: Mediating Role of Consumer Emotions. Psychology & Marketing, 14(4), 361–378.
Slama, M. E., & Tashchian, A. (1987). Validating the SOR Paradigm for Consumer Involvement with a Convenience Good. Journal of the Academy of Marketing Science, 15(1), 36–45.
Soroya, S. H., Farooq, A., Mahmood, K., Isoaho, J., & Zara, S. E. (2021). From Information Seeking to Information Avoidance: Understanding the Health Information Behavior during a Global Health Crisis. Information Processing & Management, 58(2), 102440.
Uscinski, J. E., Enders, A. M., Klofstad, C., Seelig, M., Funchion, J., Everett, C., Wuchty, S., Premaratne, K., & Murthi, M. (2020). Why Do People Believe COVID-19 Conspiracy Theories? Harvard Kennedy School Misinformation Review, 1(3). https://misinforeview.hks.harvard.edu/article/why-do-people-believe-covid-19-conspiracy-theories/
Vosoughi, S., Roy, D., & Aral, S. (2018). The Spread of True and False News Online. Science, 359(6380), 1146–1151.
Wang, C. Y., Seng-cho, T. C., & Chang, H. C. (2009). Emotion and Motivation: Understanding User Behavior of Web 2.0 Application. 2009 Sixth International Conference on Information Technology: New Generations (pp. 1341–1346). IEEE.
Wang, Y., McKee, M., Torbica, A., & Stuckler, D. (2019). Systematic Literature Review on the Spread of Health-related Misinformation on Social Media. Social Science & Medicine, 240, 112552.
Weeks, B. E. (2015). Emotions, Partisanship, and Misperceptions: How Anger and Anxiety Moderate the Effect of Partisan Bias on Susceptibility to Political Misinformation. Journal of Communication, 65(4), 699–719.
Wu, M. (2022). What Drives People to Share Misinformation on Social Media during the COVID-19 Pandemic: A Stimulus-Organism-Response Perspective. International Journal of Environmental Research and Public Health, 19(18), 11752.
Xiao, X., & Su, Y. (2022). Stumble on Information or Misinformation? Examining the Interplay of Incidental News Exposure, Narcissism, and New Media Literacy in Misinformation Engagement. Internet Research.
Zhou, X., Shu, K., Phoha, V. V., Liu, H., & Zafarani, R. (2022). "This Is Fake! Shared It by Mistake": Assessing the Intent of Fake News Spreaders. Proceedings of the ACM Web Conference 2022 (pp. 3685–3694).