- Jul 2023
-
arxiv.org
-
In traditional artforms characterized by direct manipulation [32] of a material (e.g., painting, tattoo, or sculpture), the creator has a direct hand in creating the final output, and therefore it is relatively straightforward to identify the creator’s intentions and style in the output. Indeed, previous research has shown the relative importance of “intention guessing” in the artistic viewing experience [33, 34], as well as the increased creative value afforded to an artwork if elements of the human process (e.g., brushstrokes) are visible [35]. However, generative techniques have strong aesthetics themselves [36]; for instance, it has become apparent that certain generative tools are built to be as “realistic” as possible, resulting in a hyperrealistic aesthetic style. As these aesthetics propagate through visual culture, it can be difficult for a casual viewer to identify the creator’s intention and individuality within the outputs. Indeed, some creators have spoken about the challenges of getting generative AI models to produce images in new, different, or unique aesthetic styles [36, 37].
Traditional artforms (direct manipulation) versus AI (tools have a built-in aesthetic)
Some authors speak of having to wrest control of the AI output from its trained style, making it challenging to create unique aesthetic styles. The artist influences the output only indirectly, by selecting training data and manipulating prompts.
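A minimal sketch of what that prompt-level wrestling can look like in practice, using the Hugging Face diffusers library; the model ID, prompt wording, and negative-prompt terms are illustrative choices, not a recipe from the paper:

```python
# Minimal sketch: steering a text-to-image model away from its default
# aesthetic with a prompt and a negative prompt (Hugging Face diffusers).
# Model ID and prompt wording are illustrative, not prescriptive.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="a street scene, loose gouache illustration, visible brushwork",
    # Push against the model's trained tendency toward glossy photorealism.
    negative_prompt="photorealistic, hyperrealistic, 8k, sharp focus",
    guidance_scale=7.5,
).images[0]
image.save("street_scene.png")
```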
As use of the technology becomes more diverse—as consumer photography did over the last century, the authors point out—how will biases and decisions by the owners of the AI tools influence what creators are able to make?
To a limited extent, this is already happening in photography. Smartphones run algorithms on image sensor data to construct the picture, which is a source of controversy; see Why Dark and Light is Complicated in Photographs | Aaron Hertzmann’s blog and Putting Google Pixel's Real Tone to the test against other phone cameras - The Washington Post.
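A toy illustration of the point (the gamma curves and “raw” values below are invented): the same sensor data renders very differently depending on which tone curve the pipeline applies, which is exactly where the dark-and-light controversy lives.

```python
# Toy illustration: one "raw" sensor value, two different renderings,
# depending on which tone curve the camera pipeline applies. The curves
# are simple gamma functions chosen for illustration only.
import numpy as np

raw = np.array([0.05, 0.18, 0.50, 0.90])  # normalized sensor luminances

curve_a = raw ** (1 / 2.2)  # steeper gamma: lifts shadows more
curve_b = raw ** (1 / 1.6)  # flatter gamma: lifts shadows less

for r, a, b in zip(raw, curve_a, curve_b):
    print(f"raw {r:.2f} -> curve A {a:.2f}, curve B {b:.2f}")
```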
-
- Mar 2023
-
idlewords.com
-
we have turned to machine learning, an ingenious way of disclaiming responsibility for anything. Machine learning is like money laundering for bias. It's a clean, mathematical apparatus that gives the status quo the aura of logical inevitability. The numbers don't lie.
Machine learning like money laundering for bias
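A small synthetic sketch of the laundering mechanism: train a model on historically biased decisions and it reproduces the bias through a correlated proxy feature, even though the protected attribute never appears as an input. All data below is fabricated.

```python
# Synthetic sketch of "money laundering for bias": the training labels
# encode a historical bias against group 1; the model never sees group
# membership, but a correlated proxy feature (e.g., zip code) lets it
# reproduce the bias anyway. All data is fabricated for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)           # hidden protected attribute
proxy = group + rng.normal(0, 0.3, n)   # feature correlated with it
skill = rng.normal(0, 1, n)             # legitimately relevant feature
# Historical decisions: skill matters, but group 1 was penalized.
hired = (skill - 1.0 * group + rng.normal(0, 0.5, n)) > 0

model = LogisticRegression().fit(np.column_stack([skill, proxy]), hired)
pred = model.predict(np.column_stack([skill, proxy]))
print("predicted hire rate, group 0:", pred[group == 0].mean())
print("predicted hire rate, group 1:", pred[group == 1].mean())
# The gap persists: the model has laundered the bias through the proxy.
```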
-
- Apr 2022
-
www.theatlantic.com
-
Before 2009, Facebook had given users a simple timeline––a never-ending stream of content generated by their friends and connections, with the newest posts at the top and the oldest ones at the bottom. This was often overwhelming in its volume, but it was an accurate reflection of what others were posting. That began to change in 2009, when Facebook offered users a way to publicly “like” posts with the click of a button. That same year, Twitter introduced something even more powerful: the “Retweet” button, which allowed users to publicly endorse a post while also sharing it with all of their followers. Facebook soon copied that innovation with its own “Share” button, which became available to smartphone users in 2012. “Like” and “Share” buttons quickly became standard features of most other platforms. Shortly after its “Like” button began to produce data about what best “engaged” its users, Facebook developed algorithms to bring each user the content most likely to generate a “like” or some other interaction, eventually including the “share” as well. Later research showed that posts that trigger emotions––especially anger at out-groups––are the most likely to be shared.
The Firehose versus the Algorithmic Feed
See the related discussion in The Internet Is Not What You Think It Is: A History, A Philosophy, A Warning, though the treatment here has more depth.
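A toy contrast of the two orderings (posts and engagement scores invented): the same content, sorted by two different keys.

```python
# Minimal sketch: the pre-2009 "firehose" orders posts by recency; the
# algorithmic feed reorders the same posts by a predicted-engagement
# score. Posts and scores are invented for illustration.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: int               # larger = newer
    predicted_engagement: float  # model's estimate of likes/shares

posts = [
    Post("alice", 1, 0.2),
    Post("bob",   2, 0.9),  # outrage bait: high predicted engagement
    Post("carol", 3, 0.1),
]

firehose = sorted(posts, key=lambda p: p.timestamp, reverse=True)
algorithmic = sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

print([p.author for p in firehose])     # ['carol', 'bob', 'alice']
print([p.author for p in algorithmic])  # ['bob', 'alice', 'carol']
```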
-
-
lareviewofbooks.org
-
Algorithms in themselves are neither good nor bad. And they can be implemented even where you don’t have any technology to implement them. That is to say, you can run an algorithm on paper, and people have been doing this for many centuries. It can be an effective way of solving problems. So the “crisis moment” comes when the intrinsically neither-good-nor-bad algorithm comes to be applied for the resolution of problems, for logistical solutions, and so on in many new domains of human social life, and jumps the fence that contained it as focusing on relatively narrow questions to now structuring our social life together as a whole. That’s when the crisis starts.
Algorithms are agnostic
As we know them now, algorithms—and [[machine learning]] in general—do well when confined to the domains in which they started. They come apart when dealing with unbounded domains.
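The run-it-on-paper point is ancient and concrete: Euclid’s gcd procedure has been executed by hand for over two millennia. The same steps, as code:

```python
# Euclid's algorithm (c. 300 BCE): an algorithm people have executed on
# paper for over two millennia, long before any machine could run it.
def gcd(a: int, b: int) -> int:
    while b:
        a, b = b, a % b
    return a

print(gcd(1071, 462))  # 21
```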
-
- Mar 2022
-
static1.squarespace.com
-
algorithmic embedding and enhancement of biases that reinforce racism, sexism, and structural inequality
Of note.
-
-
cacm.acm.org
-
computers might therefore easily outperform humans at facial recognition and do so in a much less biased way than humans. And at this point, government agencies will be morally obliged to use facial recognition software since it will make fewer mistakes than humans do.
Banning it now because it isn't as good as humans leaves little room for a time when the technology is better than humans, when the algorithm's calculations are less biased than human perception and interpretation. So we need rigorous methodologies for testing and documenting algorithmic machine models, as well as psychological studies, to know when the boundary of machine-better-than-human is crossed.
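One sketch of what such a testing methodology could look like: measure the false match rate per demographic group and compare it to a human baseline. Group names, score distributions, and thresholds below are placeholders.

```python
# Sketch of a bias audit for a face matcher: compute the false match rate
# per demographic group and compare against a measured human baseline.
# Group names, scores, and thresholds are illustrative placeholders.
import numpy as np

def false_match_rate(scores, same_person, threshold):
    """Fraction of different-person pairs the system wrongly accepts."""
    impostor = scores[~same_person]
    return float((impostor >= threshold).mean())

rng = np.random.default_rng(1)
human_baseline_fmr = 0.02  # hypothetical measured human error rate

for group in ["group_a", "group_b"]:
    scores = rng.normal(0.3 if group == "group_a" else 0.4, 0.15, 10_000)
    same_person = rng.random(10_000) < 0.5
    fmr = false_match_rate(scores, same_person, threshold=0.6)
    verdict = "better than human" if fmr < human_baseline_fmr else "worse"
    print(f"{group}: FMR={fmr:.4f} ({verdict})")
```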
-
-
www.nature.com
-
In less than 6 hours after starting on our in-house server, our model generated 40,000 molecules that scored within our desired threshold. In the process, the AI designed not only VX, but also many other known chemical warfare agents that we identified through visual confirmation with structures in public chemistry databases. Many new molecules were also designed that looked equally plausible.
Although the model was driven "towards compounds such as the nerve agent VX," it generated not only VX but also many other known chemical warfare agents, as well as many new molecules "that looked equally plausible."
AI is the tool. The parameters by which it is set up make the output "good" or "bad".
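An abstract sketch of that point: in a generate-and-score loop, the sign on a single weight decides whether toxicity is screened out or sought out. The candidates and properties below are random stand-ins; there is no chemistry here.

```python
# Abstract sketch: in a generate-and-score loop, the sign of one weight
# decides whether "toxicity" is penalized (drug discovery) or rewarded
# (the misuse case). Candidates and scores are random stand-ins.
import random

def generate_candidate():
    # Stand-in for a generative model's output: two made-up properties.
    return {"activity": random.random(), "toxicity": random.random()}

def score(candidate, toxicity_weight):
    return candidate["activity"] + toxicity_weight * candidate["toxicity"]

random.seed(0)
candidates = [generate_candidate() for _ in range(10_000)]

benign = max(candidates, key=lambda c: score(c, toxicity_weight=-1.0))
misuse = max(candidates, key=lambda c: score(c, toxicity_weight=+1.0))
print("penalize toxicity ->", benign)  # selects low-toxicity candidates
print("reward toxicity   ->", misuse)  # same tool, inverted objective
```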
-
- Aug 2020
-
www.youtube.com
-
Identifying social media manipulation with OSoMe tools. (2020, August 11). https://www.youtube.com/watch?v=1BMv0PrdVGs&feature=youtu.be
-
- Nov 2018
-
logicmag.io
-
how does misrepresentative information make it to the top of the search result pile—and what is missing in the current culture of software design and programming that got us here?
Two core questions in one? As to "how" bad info bubbles to the top of our search results, we know that the algorithms are proprietary—but the humans who design them bring their biases. As to "what is missing," Safiya Noble suggests here and elsewhere that the engineers in Silicon Valley could use a good dose of the humanities and social sciences in their decision-making. Is she right?
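One concrete mechanism behind the "how": a toy simulation of a click-trained ranker, where early exposure earns clicks and clicks earn more exposure, so an initial edge compounds regardless of quality. All numbers invented.

```python
# Toy simulation of one mechanism: a ranker trained on clicks gives
# top-ranked results more exposure, which earns more clicks, which
# raises their rank. An early edge compounds regardless of accuracy.
results = {"sensational": 1.0, "accurate": 1.0}  # initial click counts

for _ in range(20):
    ranking = sorted(results, key=results.get, reverse=True)
    # Position bias: the top slot gets most of the attention.
    exposure = {ranking[0]: 0.8, ranking[1]: 0.2}
    # The sensational result converts attention to clicks a bit better.
    ctr = {"sensational": 0.12, "accurate": 0.10}
    for item in results:
        results[item] += 1000 * exposure[item] * ctr[item]

print(results)  # the early leader dominates, whatever its accuracy
```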
-