17 Matching Annotations
  1. Last 7 days
    1. https://web.archive.org/web/20241115135937/https://workforcefuturist.substack.com/p/ai-agents-building-your-digital-workforce

      On AI agents and the engineering needed to get one going. A few things stand out at first glance: it frames agents as the next hype (cf. the plateau in model development), says they're for personal tools (which doesn't square with the hype being VC-fuelled; personal tools are not of interest to VCs), and mentions a few personal use cases, e.g. automation. Cf. [[Open Geodag 20241107100937]] where Ed Parsons of Google AI spoke on the same topic.

  2. Nov 2024
    1. these teammates

      Like MS Teams is your teammate, like your accounting software is your teammate. Do they call their own Atlassian tools teammates too? Do these people at Atlassian get out much? Or don't they realise that the other handles in their Slack channel represent people, not just other bits of software? Did remote work lead to dehumanizing co-workers? How else do you come up with this wording? Nothing makes you sound more human than talking about 'deploying' teammates. My money is on this article being mostly generated. Reverse Turing test: it's on them to show otherwise.

    2. There’s a lot to be said for the promise that AI agents bring to organizations.

      And as usual in these articles, the truth comes at the end: it's again just promises.

    3. People should always be at the center of an AI application, and agents are no different

      At the center of an AI application, like what, mechanical Turks?

    4. Don’t – remove the human aspect

      After a section celebrating examples doing just that!

    5. As various agents start to take care of routine tasks, provide real-time insights, create first drafts, and more, team members can focus on more meaningful interactions, collaboration,

      This sentence, preceded by two examples where interaction and collaboration were delegated to bots handing out generated warm feelings, does not convey much that is positive about Atlassian. It basically says that a lot of human interaction in the org is seen as meaningless: please go do that with a bot, not a colleague. Did their branding AI agent write this?

    6. [A]gents can also help build team morale by highlighting team members' contributions and encouraging colleagues to celebrate achievements through suggested notes

      Like LinkedIn wants you to congratulate people on their work anniversary?

    7. One of my favorite use cases for agents is related to team culture. Agents can be a great onboarding buddy — getting new team members up to speed by providing them with key information, resources, and introductions to team members.

      Welcome to our company; you'll meet your first human colleague after you've interacted with our onboarding robot for a week. No thanks.

    8. inviting a new AI agent to join your team in service of your shared goa[l]

      Anthropomorphising should be on this article's don't list. 'Inviting someone onto your team' is a highly social thing; bringing in a software tool is a different thing.

    9. One of our most popular agent use cases for a while was during our yearly performance reviews a few months back. People pointed an agent to our growth profiles and had it help them reframe their self-reflections to better align with career development goals and expectations. This was a simple agent to create an application that helped a wide range of Atlassians with something of high value to them.

      An AI agent to help you speak corporate better, because no one actually writes/reflects/talks that way themselves. How did the recipients of these reports perceive the change? Did they think the quality was better, or did all reflections now read the same?

    10. Start by practising and experimenting with the basics, like small, repetitive tasks. This is often a great mix of value (time saved for you) and likely success (hard for the agent to screw up). For example, converting a simple list of topics into an agenda is one step of preparing for a meeting, but it's tedious and something that you can enlist an agent to do right away

      Low-end tasks for agents don't really need AI, do they? Cf. Ed Parsons last week w.r.t. automation as the AI focus.
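
      To illustrate that point (my own sketch, not from the article): the agenda example from the quote needs no model at all; a few lines of plain Python cover it. The topics, start time, and slot length below are made up.

      ```python
      # Hypothetical sketch: turning a list of topics into a timed agenda
      # without any AI. Topics, start time, and slot length are made up.
      from datetime import datetime, timedelta

      def topics_to_agenda(topics, start="09:00", minutes_per_topic=15):
          """Assign each topic a sequential time slot and return agenda lines."""
          t = datetime.strptime(start, "%H:%M")
          slot = timedelta(minutes=minutes_per_topic)
          lines = []
          for i, topic in enumerate(topics, 1):
              end = t + slot
              lines.append(f"{t:%H:%M}-{end:%H:%M}  {i}. {topic}")
              t = end
          return "\n".join(lines)

      print(topics_to_agenda(["Project status", "Budget", "AOB"]))
      ```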

    11. For instance, a 'Comms Crafter' agent is specialized in all things content, from blogs to press releases, and is designed to adhere to specific brand guidelines. A 'Decision Director' agent helps teams arrive at effective decisions faster by offering expertise on our specific decision-making framework. In fact, in less than six months, we’ve already created over 500 specialized agents internally.

      This does not fully chime with my own perception of (AI) agents; at least the titles don't. The tail ends of the descriptions, 'trained to adhere to brand guidelines' and 'expertise in the internal decision-making framework', make more sense. I suppose I also rail against these being the org's agents rather than the team's or the individual professional's agents. Vibes of having an automated political officer in your unit.
      - [ ] explore the nature and examples of AI agents that fit within an individual professional's scope #ontwikkelingspelen #netag #30mins #4hr

  3. Oct 2024
    1. The gap between promise and reality also creates a compelling hype cycle that fuels funding

      The gap is a constant, I suspect: in the tech itself, since my EE days, and in people's expectations. Cf. [[Gap tussen eigen situatie en verwachting is constant 20071121211040]]

  4. Jun 2024
    1. you're going to have like 100 million more AI researchers and they're going to be working at 100 times what you are

      for - stats - comparison of cognitive powers - AGI AI agents vs human researcher

      stats - comparison of cognitive powers - AGI AI agents vs human researcher
      - 100 million AGI AI researchers
      - each AGI AI researcher is 100x more efficient than its equivalent human AI researcher
      - total productivity increase = 100 million x 100 = the equivalent of 10 billion human AI researchers! Wow!

    2. nobody's really pricing this in

      for - progress trap - debate - nobody is discussing the dangers of such a project!

      progress trap - debate - nobody is discussing the dangers of such a project!
      - Civilization's journey has been to create more and more powerful tools for human beings to use
      - but this tool is different because it can act autonomously
      - it can solve problems that dwarf our individual or even group ability to solve
      - philosophically, the problem / solution paradigm becomes a central question because,
        - as presented in Deep Humanity praxis, humans have never stopped producing progress traps as shadow sides of technology, because
        - the reductionist problem-solving approach always reaches conclusions based on a finite amount of knowledge of the relationships of any one particular area of focus
        - in contrast to the infinite, fractal relationships found at every scale of nature
        - supercomputing can never bridge the gap between finite and infinite
      - a superintelligent artifact with that autonomy of pattern recognition may recognize a pattern in which humans are not efficient, and that greater efficiency gains can be had by eliminating us

  5. Nov 2023
    1. that minds are constructed out of cooperating (and occasionally competing) “agents.”

      Cf. how I discussed an application this morning that deployed multiple AI agents as an interconnected network, each with its own role. [[Rolf Aldo Common Ground AI consensus]]
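
      A minimal sketch of that pattern (my own, not from that discussion): several agents, each with a fixed role, working on a shared piece of text in turn. The role names, the call_model stub, and the chain shape are assumptions for illustration; the actual application was an interconnected network rather than a simple pipeline.

      ```python
      # Hypothetical sketch of role-based agents passing work along a chain.
      # call_model is a stand-in for whatever LLM backend would actually be used.
      from dataclasses import dataclass

      def call_model(prompt: str) -> str:
          # Placeholder: a real implementation would call a language model here.
          return f"[model output for: {prompt[:40]}...]"

      @dataclass
      class Agent:
          name: str
          role: str  # e.g. "summarise", "find common ground", "draft consensus"

          def act(self, material: str) -> str:
              return call_model(f"You are the {self.name}. Your role: {self.role}.\n\n{material}")

      # A small 'network': each agent works on the previous agent's output.
      pipeline = [
          Agent("Summariser", "condense all contributions"),
          Agent("Mediator", "identify common ground between positions"),
          Agent("Drafter", "write a consensus statement"),
      ]

      material = "Raw input: the various participant contributions would go here."
      for agent in pipeline:
          material = agent.act(material)
          print(f"{agent.name}: {material}")
      ```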

  6. Feb 2021
    1. move away from viewing AI systems as passive tools that can be assessed purely through their technical architecture, performance, and capabilities. They should instead be considered as active actors that change and influence their environments and the people and machines around them.

      Agents don't have free will, but they are influenced by their surroundings, making it hard to predict how they will respond, especially in real-world contexts where interactions are complex and can't be controlled.