45 Matching Annotations
  1. Oct 2022
    1. Trolls, in this context, are humans who hold accounts on social media platforms, more or less for one purpose: To generate comments that argue with people, insult and name-call other users and public figures, try to undermine the credibility of ideas they don’t like, and to intimidate individuals who post those ideas. And they support and advocate for fake news stories that they’re ideologically aligned with. They’re often pretty nasty in their comments. And that gets other, normal users, to be nasty, too.

      Not only are programmed accounts created, but also troll accounts that propagate disinformation and spread fake news with the intent to cause havoc among people. Once a troll starts with a malicious comment, some people engage with it, which leads to more rage comments and disagreements. That is the goal: they trigger people to engage with their comments so the content spreads further and produces more fake news. These troll accounts are usually most prominent during elections; in the Philippines, for example, some speculate that certain candidates ran troll farms just to spread fake news across social media, which many people then engaged with.

    2. So, bots are computer algorithms (set of logic steps to complete a specific task) that work in online social network sites to execute tasks autonomously and repetitively. They simulate the behavior of human beings in a social network, interacting with other users, and sharing information and messages [1]–[3]. Because of the algorithms behind bots’ logic, bots can learn from reaction patterns how to respond to certain situations. That is, they possess artificial intelligence (AI). 

      In all honesty, since I don't usually dwell on technology and coding, I thought a "bot" was controlled by another user, a real person; I never knew it was programmed to learn the usual posting patterns of people, whether on Twitter, Facebook, or other social media platforms. I think it is important to properly understand how bots work to avoid misinformation and disinformation, especially in this era of heavy social media use.
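
      As a rough illustration of the quoted description, and not any real platform's API, the sketch below shows the kind of autonomous, repetitive loop a simple social bot runs: search for posts that match a keyword, pick a canned reply, post it, and wait a randomized interval so the pattern looks less machine-like. The `FakePlatform` class, the topics, and the replies are all invented stand-ins; a real bot would call a platform API and might add a learned model for choosing replies.

      ```python
      import random
      import time

      # Hypothetical stand-in for a real social network client; a real bot would
      # call a platform API here instead of scanning an in-memory list of posts.
      class FakePlatform:
          def __init__(self, posts):
              self.posts = list(posts)

          def search(self, keyword):
              """Return posts that mention the keyword."""
              return [p for p in self.posts if keyword.lower() in p.lower()]

          def reply(self, post, text):
              print(f"[bot] replying to {post!r}: {text}")

      # Canned replies keyed by topic: the fixed "set of logic steps" the quote describes.
      REPLIES = {
          "election": ["Don't trust the polls!", "Share this before it gets deleted!"],
          "vaccine": ["Do your own research...", "They don't want you to know this."],
      }

      def run_bot(platform, cycles=3):
          for _ in range(cycles):
              for keyword, replies in REPLIES.items():
                  for post in platform.search(keyword):
                      platform.reply(post, random.choice(replies))
              # Randomized delay so the posting rhythm looks less machine-like.
              time.sleep(random.uniform(0.1, 0.5))

      if __name__ == "__main__":
          feed = ["Big election rally tonight", "New vaccine study released", "Nice weather today"]
          run_bot(FakePlatform(feed), cycles=1)
      ```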

  2. Jun 2022
    1. algorithmic radicalization is presumably a simpler problem to solve than the fact that there are people who deliberately seek out vile content. “These are the three stories—echo chambers, foreign influence campaigns, and radicalizing recommendation algorithms—but, when you look at the literature, they’ve all been overstated.”

      algorithmic radicalization
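
      The "radicalizing recommendation algorithm" story in the quote usually means an engagement-maximizing feedback loop: the system recommends more of whatever a user already clicks, the user clicks more of it, and the feed narrows. The toy simulation below, with made-up topics and click probabilities rather than any real platform's ranking system, shows how such a loop can pull a feed toward one kind of content.

      ```python
      import random
      from collections import Counter

      TOPICS = ["news", "sports", "cooking", "extreme"]

      def recommend(click_counts, n=10):
          """Naive recommender: sample topics in proportion to past clicks (+1 smoothing)."""
          weights = [click_counts[t] + 1 for t in TOPICS]
          return random.choices(TOPICS, weights=weights, k=n)

      def simulate(rounds=20, seed=0):
          random.seed(seed)
          clicks = Counter()
          # Assume the user is somewhat more likely to click "extreme" items when shown them.
          click_prob = {"news": 0.3, "sports": 0.3, "cooking": 0.3, "extreme": 0.6}
          for _ in range(rounds):
              for item in recommend(clicks):
                  if random.random() < click_prob[item]:
                      clicks[item] += 1
          return clicks

      if __name__ == "__main__":
          # Over the rounds, "extreme" tends to dominate the click history,
          # so the recommender shows it more and more often.
          print(simulate())
      ```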

  3. Oct 2021
    1. We propose a tri-relationship embedding framework TriFN, which models publisher-news relations and user-news interactions simultaneously for fake news classification. We conduct experiments on two real-world datasets, which demonstrate that the proposed approach significantly outperforms other baseline methods for fake news detection.

      It was said in the conclusion that TriFN can achieve good fake news detection performance even in the early stage of information dissemination because of the interactions on social media. User credibility was also mentioned, since low-credibility users tend to spread fake news.

      This means that users play a big part in detecting and reducing fake news on social media. Let's be responsible and share only credible news articles, and report the misleading ones.
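
      The actual TriFN framework learns embeddings for news content, users, and publishers jointly (via matrix-factorization techniques), which is more than a short note can show, but the basic idea of combining content features with social-context signals, publisher bias and the credibility of the users who shared a story, can be sketched with an ordinary classifier. The features and numbers below are invented for illustration and are not the paper's implementation.

      ```python
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      # Toy articles as (clickbait score, publisher bias score, mean credibility of
      # users who shared the story); label 1 = fake, 0 = real. All values invented.
      X = np.array([
          [0.9, 0.8, 0.2],
          [0.8, 0.7, 0.3],
          [0.2, 0.1, 0.9],
          [0.3, 0.2, 0.8],
          [0.7, 0.9, 0.1],
          [0.1, 0.3, 0.7],
      ])
      y = np.array([1, 1, 0, 0, 1, 0])

      # A single linear model over the three relation-derived features stands in for
      # TriFN's joint modelling of news content, publisher-news, and user-news relations.
      clf = LogisticRegression().fit(X, y)

      # A new story shared mostly by low-credibility users scores as likely fake.
      candidate = np.array([[0.85, 0.6, 0.25]])
      print("P(fake) =", clf.predict_proba(candidate)[0, 1])
      ```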

  4. Nov 2018
    1. As deepfakes make their way into social media, their spread will likely follow the same pattern as other fake news stories. In a MIT study investigating the diffusion of false content on Twitter published between 2006 and 2017, researchers found that “falsehood diffused significantly farther, faster, deeper, and more broadly than truth in all categories of information.” False stories were 70 percent more likely to be retweeted than the truth and reached 1,500 people six times more quickly than accurate articles.

      This sort of research should make it easier to find and stamp out false content from the social media side of things. However, we need regulations to actually make that happen.
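
      "Farther, faster, deeper, and more broadly" refers to measured properties of retweet cascades. As a rough sketch of what such measurements look like, on toy data rather than the study's dataset or code, the snippet below computes the size, depth, and breadth of one cascade from who-retweeted-whom edges.

      ```python
      from collections import defaultdict, deque

      def cascade_stats(edges, root):
          """Depth = longest retweet chain from the original post;
          breadth = most users reached at any single depth; size = total users."""
          children = defaultdict(list)
          for parent, child in edges:
              children[parent].append(child)

          depth_of = {root: 0}
          queue = deque([root])
          while queue:  # breadth-first walk of the cascade tree
              node = queue.popleft()
              for c in children[node]:
                  if c not in depth_of:
                      depth_of[c] = depth_of[node] + 1
                      queue.append(c)

          per_level = defaultdict(int)
          for d in depth_of.values():
              per_level[d] += 1
          return {"size": len(depth_of), "depth": max(depth_of.values()),
                  "breadth": max(per_level.values())}

      if __name__ == "__main__":
          # Toy cascade: tweet "A" retweeted by B and C, then further down the chain.
          edges = [("A", "B"), ("A", "C"), ("B", "D"), ("D", "E"), ("C", "F"), ("C", "G")]
          print(cascade_stats(edges, "A"))  # {'size': 7, 'depth': 3, 'breadth': 3}
      ```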