40 Matching Annotations
  1. Last 7 days
    1. To complicate matters, bots evolve rapidly. They are now in their 4th generation of sophistication, with evasion techniques so advanced that they require the most powerful technology to combat them.

      Generation 1 – Basic scripts making cURL-like requests from a small number of IP addresses. These bots can’t store cookies or execute JavaScript, and can easily be detected and mitigated by blacklisting their IP address and User-Agent combinations.

      Generation 2 – These bots leverage headless browsers such as PhantomJS and can store cookies and execute JavaScript. They require a more sophisticated, IP-agnostic approach such as device fingerprinting, which collects their unique combination of browser and device characteristics – the OS, JavaScript variables, session and cookie information, etc.

      Generation 3 – These bots use full-fledged browsers and can simulate basic human-like interaction patterns, such as simple mouse movements and keystrokes. This behavior makes them difficult to detect; they normally bypass traditional security solutions, requiring a more sophisticated approach than blacklisting or fingerprinting.

      Generation 4 – These bots are the most sophisticated. They exhibit advanced human-like interaction characteristics (so shallow interaction-based detection yields false positives) and are distributed across tens of thousands of IP addresses. They can carry out violations from various sources at various (random) times, requiring a high level of intelligence, correlation, and contextual analysis.

      Good way to categorize bots
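The Generation 2 device-fingerprinting approach quoted above can be sketched in a few lines: derive a stable identifier from a client's combination of browser and device characteristics, so the same client is recognized even when its IP address changes. This is only an illustration; the field names below are assumptions, not any real product's schema.

```python
import hashlib
import json

def device_fingerprint(characteristics: dict) -> str:
    """Hash a client's browser/device characteristics into a stable ID."""
    # Canonical JSON (sorted keys) makes equal inputs hash equally.
    canonical = json.dumps(characteristics, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

# The same characteristics yield the same fingerprint from any IP address,
# which is what makes the approach "IP-agnostic".
bot = {"os": "Linux", "user_agent": "PhantomJS/2.1",
       "timezone": "UTC", "cookies_enabled": False}
print(device_fingerprint(bot)[:16])
```

Real fingerprinting systems collect far more entropy (canvas rendering, fonts, installed plugins), but the core idea is the same: a hash over attributes the client cannot cheaply rotate.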

  2. Apr 2021
    1. empty is a utility that provides an interface for executing and/or interacting with processes under pseudo-terminal sessions (PTYs). This tool is particularly useful in shell scripts designed to communicate with interactive programs like telnet, ssh, ftp, etc.
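The same idea that empty implements, driving a program through a pseudo-terminal instead of a plain pipe, can be sketched with Python's standard-library pty module. This is not the empty tool itself, just a minimal illustration of PTY-based process interaction:

```python
import os
import pty

def run_under_pty(argv):
    """Run a command with its I/O attached to a PTY and return its output."""
    pid, master_fd = pty.fork()
    if pid == 0:
        # Child: exec the program inside the new pseudo-terminal.
        os.execvp(argv[0], argv)
    chunks = []
    while True:
        try:
            data = os.read(master_fd, 1024)
        except OSError:  # EIO is raised when the child closes its end
            break
        if not data:
            break
        chunks.append(data)
    os.close(master_fd)
    os.waitpid(pid, 0)
    return b"".join(chunks).decode(errors="replace")

print(run_under_pty(["echo", "hello from a pty"]).strip())
```

The point of the PTY is that interactive programs like telnet or ssh detect a terminal on their stdin/stdout and behave as they would for a human, which plain pipes do not achieve.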
  3. Mar 2021
    1. Just as we've become super-human thanks to telephones, calendars and socks, we can continue our evolution into cyborgs in a concrete jungle with socially curated bars and mathematically incorruptible governance.
    2. we should eagerly anticipate granting ourselves the extra abilities afforded to us by Turing machines
    3. Stop thinking of the ideal user as some sort of honorable, frontier pilgrim; a first-class citizen who carries precedence over the lowly bot. Bots need to be granted the same permission as human users and it’s counter-productive to even think of them as separate users. Your blind human users with screen-readers need to behave as “robots” sometimes and your robots sending you English status alerts need to behave as humans sometimes.
    1. One person writing a tweet would still qualify for free-speech protections—but a million bot accounts pretending to be real people and distorting debate in the public square would not.

      Do bots have or deserve the right to not only free speech, but free reach?

  4. Feb 2021
    1. It turns out that, given a set of constraints defining a particular problem, deriving an efficient algorithm to solve it is a very difficult problem in itself. This crucial step cannot yet be automated and still requires the insight of a human programmer.
  5. Jan 2021
    1. bots

      The word bot is described as, "a software program that can execute commands, reply to messages, or perform routine tasks, as online searches, either automatically or with minimal human intervention (often used in combination)." (Dictionary.com) As the chapter describes, it is extremely interesting how prominent bots have become online, and how difficult it often is for an average user to distinguish a bot from a human user. I wonder whether bots can have true identities, or whether a user must be human in order to have a true identity on the web.

  6. Nov 2020
    1. About auto-close bots... I can appreciate the need for issue grooming, but surely there must be a better way to go about it than letting an issue or PR's fate be semi-permanently decided and auto-closed by an unknowing bot. Should I be periodically pushing up no-op commits or adding useless "bump" comments to keep that from happening? I know the maintainers are busy people, and that it can take a long time to work through and review hundreds of open issues and PRs, so out of respect to them, I was just taking a "be patient; they'll get to it when they get to it" approach. Sometimes an issue is not so much "stale" as it is unnoticed, forgotten about, or consciously deferred for later. So if anything, after a certain length of time, if a maintainer still hasn't reviewed/merged/accepted/rejected a pull request, then perhaps it should instead be auto-bumped to the top of the queue, to remind them that they (preferably a human) still need to review it and decide its fate... :)
  7. Jul 2020
  8. Jun 2020
  9. Mar 2020
    1. we have anxious salarymen asking about the theft of their jobs, in the same way that’s apparently done by immigrants
    2. We long ago admitted that we’re poor at scheduling, so we have roosters; sundials; calendars; clocks; sand timers; and those restaurant staff who question my integrity, interrupting me with a phone call under the premise of “confirming” that I’ll stick to my word regarding my reservation.
    3. A closely-related failing to scheduling is our failure to remember, so humans are very willing to save information on their computers for later.
    1. If these asset owners regarded the “robots” as having the same status as guide dogs, blind people or default human citizens, they would undoubtedly stop imposing CAPTCHA tests and just offer APIs with reasonable limits applied.
    2. Robots are currently suffering extreme discrimination due to a few false assumptions, mainly that they’re distinctly separate actors from humans. My point of view is that robots and humans often need to behave in the same way, so it’s a fruitless and pointless endeavour to try distinguishing them.
    3. As technology improves, humans keep integrating these extra abilities into our cyborg selves
    4. In order to bypass these discriminatory CAPTCHA filters
  10. Feb 2020
  11. Nov 2019
    1. it might be due to the navigator.webdriver DOM property being true by default in Selenium-driven browsers. In Firefox, you can set the dom.webdriver.enabled config variable to false (go to about:config to change it), which disables this property. In my case this stopped reCAPTCHA from triggering.
    2. Length of your browsing sessions (bots have predictably short browsing sessions)
    1. many websites may try to prevent automated account creation, credential stuffing, etc. by going beyond CAPTCHA and trying to infer from different signals whether the UA is controlled by automation. Processing all those signals on every request is often expensive, and if a co-operating UA is willing to inform a website that it is controlled by automation, it is possible to reduce further processing. For instance, Selenium with Chrome adds a specifically named property on the document object under certain conditions, and PhantomJS adds a specifically named property on the global object. Recompiling the framework/browser engine to change that identifier and circumvent the detection is always possible. The WebDriver specification is just standardizing a mechanism for a co-operating user agent to inform a website that it is controlled by automation. I don't think denial-of-service attack is the best example, so hopefully this change will clarify the goal.
    2. Determined "attackers" would simply remove the property, be it by re-compiling Chromium or by injecting an early script (removing [Unforgeable] makes sure the latter is possible, I believe). Even non-determined ones could, when using the latter (it will simply be a built-in part/option of automated-testing libraries). I think it provides no protection whatsoever and gives websites a false sense of assurance. It is like using Content-Security-Policy and forgetting about any other client-side protection: what about browsers that do not support it? What about browsers without that feature (manually disabled, using about:config, --disable-blink-features and the like, or customized builds)? It could be a nice property for other purposes (determining test mode and running some helper code that exposes a stable API for identifying key elements in the user interface, say, though I do not think that is a best practice), but certainly not for abuse prevention.
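The thread above argues that navigator.webdriver should be treated as one weak, honest-client signal among several, since a determined attacker can simply remove it. A toy sketch of that multi-signal scoring idea, with signal names and integer weights invented purely for illustration:

```python
def automation_score(signals: dict) -> int:
    """Combine several weak automation signals into one score."""
    weights = {
        "navigator_webdriver": 5,  # co-operating UAs self-report honestly
        "no_plugins": 2,           # headless browsers often expose none
        "short_session": 2,        # bots tend to have brief sessions
        "headless_user_agent": 1,  # e.g. "HeadlessChrome" in the UA string
    }
    return sum(w for name, w in weights.items() if signals.get(name))

honest_bot = {"navigator_webdriver": True, "no_plugins": True}
stealth_bot = {"no_plugins": True, "short_session": True}  # property removed

print(automation_score(honest_bot))   # 7
print(automation_score(stealth_bot))  # 4: still flagged by other signals
```

The design point matches the commenter's objection: removing any single property (like navigator.webdriver) lowers the score but does not zero it, whereas a site that trusts that one property alone gets the "false sense of assurance" described above.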
  12. Aug 2019
    1. I can still recall playing with the “pseudo-AI” playgrounds of the late 1990s — plugging AOL AIM Messenger up to a response engine. Lots of fun! Well, things have come a long way, and I thought I’d take a stab at doing some fun stuff with A.I. and one of my favorite platforms to hack around in — Twitter. In this post I’m going to show how you can 1) create an AI based on your Twitter account and 2) automatically tweet out whatever your AI wants to. Twitter is actually the perfect playground for such ventures. Lots of sample texts, concrete themes, easy sampling…

      Imagine having an entire technological entity with a mind of its own at your disposal... It's real and it exists. Artificial Intelligence bots are built to respond and interact with real users based on whatever they choose to say themselves. This concept is so intriguing because a question must be raised: can this technology grow to be more powerful than human control?

  13. Mar 2019
    1. If you do not like the price you’re being offered when you shop, do not take it personally: many of the prices we see online are being set by algorithms that respond to demand and may also try to guess your personal willingness to pay. What’s next? A logical next step is that computers will start conspiring against us. That may sound paranoid, but a new study by four economists at the University of Bologna shows how this can happen.
  14. Oct 2018
  15. Aug 2018
    1. The first of the two maps in the GIF image below shows the US political spectrum on the eve of the 2016 election. The second map highlights the followers of a 30-something American woman called Jenna Abrams, a following gained with her viral tweets about slavery, segregation, Donald Trump, and Kim Kardashian. Her far-right views endeared her to conservatives, and her entertaining shock tactics won her attention from several mainstream media outlets and got her into public spats with prominent people on Twitter, including a former US ambassador to Russia. Her following in the right-wing Twittersphere enabled her to influence the broader political conversation. In reality, she was one of many fake personas created by the infamous St. Petersburg troll farm known as the Internet Research Agency.
    2. Instead of trying to force their messages into the mainstream, these adversaries target polarized communities and “embed” fake accounts within them. The false personas engage with real people in those communities to build credibility. Once their influence has been established, they can introduce new viewpoints and amplify divisive and inflammatory narratives that are already circulating. It’s the digital equivalent of moving to an isolated and tight-knit community, using its own language quirks and catering to its obsessions, running for mayor, and then using that position to influence national politics.
  16. Jul 2018
    1. RuNet Echo has previously written about the efforts of the Russian “Troll Army” to inject the social networks and online media websites with pro-Kremlin rhetoric. Twitter is no exception, and multiple users have observed Twitter accounts tweeting similar statements during and around key breaking news and events. Increasingly active throughout Russia's interventions in Ukraine, these “bots” have been designed to look like real Twitter users, complete with avatars.
  17. May 2017
    1. Multi-party Conversational Systems are systems with natural language interaction between one or more people or systems. From the moment that an utterance is sent to a group, to the moment that it is replied to in the group by a member, several activities must be done by the system: utterance understanding, information search, reasoning, among others. In this paper we present the challenges of designing and building multi-party conversational systems, the state of the art, our proposed hybrid architecture using both norms and machine learning, and some insights after implementing and evaluating one in the finance domain.

      Conversational Systems

  18. Feb 2016
    1. For example, I got the great idea to link my social bot designed to assess the “temperature” of online communities up to a piece of hardware designed to produce heat. I didn’t think to cap my assessment of the communities and so when my bot stumbled upon a super vibrant space and offered back a quantitative measure intended to signal that the community was “hot,” another piece of my code interpreted this to mean: jack the temperature up the whole way. I was holding that hardware and burnt myself. Dumb. And totally, 100% my fault.

      "Give a bot a heat gun" seems like the worst idea possible.

    2. Bots are first and foremost technical systems, but they are derived from social values and exert power into social systems.

      This is very important to keep in mind. "Bots exert power into social systems."

    3. Bots are tools, designed by people and organizations to automate processes and enable them to do something technically, socially, politically, or economically

      Interesting that danah sees all bots as tools, including art bots! She'd probably categorize those as things that do something "socially"