social-media-ethics-automation.github.io
-
lonelygirl15. November 2023. Page Version ID: 1186146298. URL: https://en.wikipedia.org/w/index.php?title=Lonelygirl15&oldid=1186146298 (visited on 2023-11-24).
It was fascinating to learn from the lonelygirl15 Wikipedia page because it gave more context on how revolutionary this web show really was. What I particularly noticed is that it is described as one of the first "web series" attempts and an early example of interactive storytelling. I'm surprised that something that began as a seemingly ordinary vlog ended up shaping so much of online video content. It makes me wonder whether the same thing could happen today, or whether we have become desensitized to the blurred line between fiction and reality on TikTok and Instagram.
-
Early in the days of YouTube, one YouTube channel (lonelygirl15 [f1]) started to release vlogs (video web logs) consisting of a girl in her room giving updates on the mundane dramas of her life. But as the channel continued posting videos and gaining popularity, viewers started to question if the events being told in the vlogs were true stories, or if they were fictional. Eventually, users discovered that it was a fictional show, and the girl giving the updates was an actress.
What I found particularly interesting about lonelygirl15's story is how it illustrates the responsibility that comes with being authentic online. The observation that "humans don't like being fooled" really resonated with me; I have certainly felt that way when something I had taken to be true later turned out to have been staged or manufactured. And, I have to admit, I also find it interesting that despite the revelation, the channel just kept growing. People may have been upset at first, but they also recognized that the story being told was genuinely good, and they still wanted to know what happened next. It makes me wonder whether, even though we value authenticity, we simply love a good story even when it isn't "real."
-
Tom Standage. Writing on the Wall: Social Media – The First 2,000 Years. Bloomsbury USA, New York, 1st edition, October 2013. ISBN 978-1-62040-283-2.
Tom Standage’s Writing on the Wall: Social Media – The First 2,000 Years offers an interesting argument: social media is not a new development but the latest step in a long lineage of sharing information socially. One point that particularly struck me was his description of handwritten letters and hand-copied books serving as early forms of "viral posts," spreading only if people deemed them worth replicating. This completely changed how I look at social media: it is not just an internet product but a deeply ingrained human behavior.
-
Before this centralization of media in the 1900s, newspapers and pamphlets were full of rumors and conspiracy theories [e2]. And now as the internet and social media have taken off in the early 2000s, we are again in a world full of rumors and conspiracy theories.
This made me think about how similar today's internet and social media landscape is to the pre-centralized media era. It is easy to assume that distortion is a new phenomenon, but that quote shows it has always been part of human communication, just through different mediums. It makes me wonder whether the real problem isn't stopping rumors so much as teaching people to think critically regardless of the platform.
-
- Apr 2025
Twitter. November 2023. Page Version ID: 1187856185. URL: https://en.wikipedia.org/wiki/Twitter (visited on 2023-12-01).
One detail that struck me was that Twitter underwent a large number of structural and policy changes after Elon Musk's acquisition, including changes to content moderation standards and user verification mechanisms. This made me realize that the operating policies behind a data platform are themselves important “metadata” that affect the quality and usability of the data. For example, if I download speech data from Twitter today, I must consider whether it was affected by the policies of a particular period; otherwise my analysis conclusions may be distorted.
-
Images are created by defining a grid of dots, called pixels. Each pixel has three numbers that define the color (red, green, and blue), and the grid is created as a list (rows) of lists (columns).
That reminds me of the color correction I used to do when editing photographs in Photoshop. Back then I would simply adjust the RGB values by feel, but now I realize that this most elementary data structure is what operates in the background. I believe that knowing these concepts lets us understand more clearly how images are digitized and how they are analyzed in applications like image recognition or AI-generated imagery.
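The grid-of-pixels idea in the quoted passage can be sketched directly in Python. This is my own minimal illustration, not code from the book: an image as a list of rows, each row a list of (red, green, blue) pixels, plus a toy "color correction" pass like the Photoshop adjustments mentioned above.

```python
# A tiny 2x3 "image": a list of rows, where each row is a list of pixels,
# and each pixel is a (red, green, blue) triple with values from 0 to 255.
image = [
    [(255, 0, 0), (0, 255, 0), (0, 0, 255)],        # row 0: red, green, blue
    [(0, 0, 0), (128, 128, 128), (255, 255, 255)],  # row 1: black, gray, white
]

# Look up a single pixel by row index, then column index.
pixel = image[1][2]          # bottom-right pixel
print(pixel)                 # (255, 255, 255)

# A simple "color correction": brighten every channel by 20, capped at 255.
brightened = [
    [tuple(min(channel + 20, 255) for channel in px) for px in row]
    for row in image
]
print(brightened[0][0])      # (255, 20, 20)
```

Real image libraries store the same grid more compactly, but the rows-of-columns-of-color-triples structure is exactly what the passage describes.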
-
Metadata is information about some data. So we often think about a dataset as consisting of the main pieces of data (whatever those are in a specific situation), and whatever other information we have about that data (metadata).
That made me realize that the distinction between data and metadata is not absolute but relative to the research objective and perspective. I used to regard things like the collection method or the moment of collection as "incidental information," but now I appreciate that these are actually significant metadata that can affect the outcome of an analysis.
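One way to picture the data/metadata distinction from the quoted passage is a small Python sketch. This is my own illustration with made-up field names: the post texts are the main data, while details about how they were gathered ride along as metadata.

```python
# The "main" data: the post texts themselves.
posts = ["Excited for the weekend!", "Check out my new vlog"]

# Metadata: information *about* that data. Which fields count as metadata
# depends on the research question; a timestamp is incidental for a
# sentiment study but central for a study of posting schedules.
dataset = {
    "data": posts,
    "metadata": {
        "collected_on": "2023-12-01",
        "collection_method": "web scrape",  # could bias which posts appear
        "platform": "Twitter",
    },
}

print(dataset["metadata"]["collection_method"])
```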
-
Steven Tweedie. This disturbing image of a Chinese worker with close to 100 iPhones reveals how App Store rankings can be manipulated. February 2015. URL: https://www.businessinsider.com/photo-shows-how-fake-app-store-rankings-are-made-2015-2 (visited on 2024-03-07).
This text made me realize that beneath algorithms and leaderboards there can be a great deal of artificially created “illusion.” The article describes something shocking: a worker handling close to 100 iPhones just to inflate the download count of a particular app. This form of “human algorithmic manipulation” is just like the click farms we discussed in the chapter; both deploy human labor to mimic automated behavior and manipulate social or platform statistics. It made me realize that the authenticity of data is never guaranteed, and that so-called “popularity” can be made up. It also made me question whether the trends recommended to me in app stores or on social media are actually authentic.
-
“There is no way in which police can maintain dignity in seizing and destroying a donkey on whose flank a political message has been inscribed.”
I was particularly struck by this sentence, since it reveals the wit and strategy behind the protest. Employing a donkey to carry a political message not only sidestepped direct police repression but also exposed the absurdity of any crackdown. This kind of creativity is reminiscent of today's "meme politics" on social media: people use humorous, absurd forms to express serious positions, communicating effectively while evading direct censorship. It shows that in a setting of restricted information, protesters devise very sophisticated means of expression.
-
Bots, on the other hand, will do actions through social media accounts and can appear to be like any other user. The bot might be the only thing posting to the account, or human users might sometimes use a bot to post for them.
This sentence made me realize that the “people” we encounter on social media may not actually be people at all. Remembering how I once argued at length with “someone” under certain topics, only to suspect later that it was an automated program, made me doubt the authenticity of social media. It also raised a question: if a bot can imitate human interaction habits, will the foundations of our trust in “identity” and “speech” be shaken as well?
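The quoted passage, about a bot posting through an ordinary-looking account, can be sketched in a few lines of Python. Everything here is hypothetical: `post_to_account` is a stand-in for whatever platform API a real bot would call, and the account name is invented.

```python
# A hypothetical bot: "post_to_account" stands in for a real platform API
# call; here it simply records each post in a log.
def post_to_account(account_name, message, log):
    log.append((account_name, message))

# Messages the bot will post on the account's behalf, on a schedule.
scheduled_posts = [
    "Good morning, everyone!",
    "Here's today's update.",
]

posted = []
for message in scheduled_posts:
    # From the outside, each post looks like any other user's post.
    post_to_account("not_a_human_15", message, posted)

print(len(posted))  # 2
```

The point of the sketch is how little separates this from a human user: the account, the posts, and the timing are all the platform ever sees.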
-