www.linkedin.com
Arle Lommel, Senior Analyst at CSA Research
One of the most interesting aspects of writing about AI and LLMs right now is that if I say anything remotely positive, some people will accuse me of being a shill for Big AI. If I say anything remotely negative, others will accuse me of being insufficiently aware of the progress AI has made.
So I will put out a few personal statements about AI that might clarify where I am on this:
1. AI is not intelligent, at least not in the human sense of the word. It is a sophisticated tool for drawing inferences from binary data and thus operates *below* a symbolic level.

2. AI, at least in the guise of LLMs, is not going to achieve artificial general intelligence (AGI) now or in the future.

3. AI is getting much better at *approximating* human behavior on a wide variety of tasks. It can be extremely useful without being intelligent, in the same way that an encyclopedia can be very useful without being intelligent.

4. For some tasks – such as translating between two languages – LLMs sometimes perform better than some humans do. They do not outperform the best humans. This poses a significant challenge for human workers that we (collectively) have yet to address: lower-skilled workers and trainees in particular begin to look replaceable, but we aren't yet grappling with what happens when we replace them so they never become the experts we need for the high end. I think the decimation of the talent pipeline in some sectors is a HUGE unaddressed problem.

5. "Human parity" is a rather pointless metric for evaluating AI. AI far exceeds human parity in some areas – such as throughput, speed, cost, and availability – while it falls far short in others. A much more interesting question is "where do humans and machines have comparative advantage, and how can we combine the two in ways that elevate the human?"

6. Human-in-the-loop (HitL) is a terrible model. Having humans – usually underpaid and overworked – act in a janitorial role to clean up AI messes is a bad use of their skill and knowledge. That's why we prefer augmentation models, what we call "human at the core," where humans maintain control. To see why the latter is better, imagine applying an HitL model to airline piloting, where the human stepped in only when the plane was in trouble (or even after it crashed). Instead, in airline piloting, the pilot is in charge and assisted by automation to stay safe.

7. AI is going to get better than it is now, but improvements in the core technology are slowing down and will increasingly be incremental. However, experience with prompting and integrating data will continue to drive improvements based on humans' ability to "trick" the systems into doing the right things.

8. Much of the value from LLMs for the language sector will come from "translation-adjacent" tasks – summarization, correcting formality, adjusting reading levels, checking terminology, discovering information, etc. – tasks that are typically not paid well.
Mar 2024
www.facebook.com
詹益鑑 · 45m

Has AI really replaced some jobs, or driven down wages for some of them? I came across an analysis today that examined Upwork freelancer data from November 1, 2022 (one month before ChatGPT's release) through February 14, 2024, and found several facts:

1. The three categories with the largest declines were writing, translation, and customer service. Writing jobs fell 33%, translation jobs fell 19%, and customer service jobs fell 16%.

2. Video editing/production jobs grew 39%, graphic design jobs grew 8%, and web design jobs grew 10%. Software development postings also increased, with backend development jobs up 6% and frontend/web development jobs up 4%.

3. Translation was clearly the hardest-hit category, with hourly rates falling more than 20%, followed by video editing/production and market research. Graphic design and web design were the most resilient: not only did job counts increase, but hourly rates also rose somewhat.

4. Since the release of ChatGPT and the OpenAI API, the number of jobs related to chatbot development has surged 2,000%. If today's AI has one killer use case, it is building chatbots.