21 Matching Annotations
  1. Jul 2021
    1. then you needed a four-step chain to get any taxes

      Each step has issues that propagate: loss of information, opportunities for corruption, etc.

    2. The first part of the story is High Modernism, an aesthetic taste masquerading as a scientific philosophy. The High Modernists claimed to be about figuring out the most efficient and high-tech way of doing things, but most of them knew little relevant math or science and were basically just LARPing being rational by placing things in evenly-spaced rectangular grids.

      This is quite a prevalent problem online - people who write confidently about a topic you know little about can sway you more easily than you think!

    1. These questions and tasks, which seem complicated to us, would sound to a superintelligent system like someone asking you to improve upon the “My pencil fell off the table” situation, which you’d do by picking it up and putting it back on the table.

      To me this comes down to whether an ASI could or would write its own code such that it experiences things like fun or boredom. Without this idea of a computer constantly 'experiencing' things (given a constant stream of data), I see no reason for it to have any agency.

    2. A machine on the second-to-highest step on that staircase would be to us as we are to ants—it could try for years to teach us the simplest inkling of what it knows and the endeavor would be hopeless

      Touching upon an earlier note - if we can expect an ASI to provide solutions to all of our problems, like providing a grand unified theory in physics or making humans functionally immortal, can we expect an ASI to create a means to interact with and improve our brains directly?

    3. There are worlds of human cognitive function a chimp will simply never be capable of, no matter how much time he spends trying

      While our 'hardware' is exceptionally good, we have similarly great 'software'.

    1. Which means an ASI, when we create it, will be the most powerful being in the history of life on Earth, and all living things, including humans, will be entirely at its whim—and this might happen in the next few decades

      And the ability to self-"improve" by rewriting its own code makes this future quite scary.

    2. This is called an Intelligence Explosion,11 and it’s the ultimate example of The Law of Accelerating Returns

      Over the course of our own exponential technological growth, our base intelligence remained (more or less) constant. An AI whose intelligence increases exponentially, with access to our technology, would - as the article says - be extremely explosive.

    3. And given the advantages over us that even human intelligence-equivalent AGI would have, it’s pretty obvious that it would only hit human intelligence for a brief instant before racing onwards to the realm of superior-to-human intelligence

      The ability to self-improve would (much like our own technological progress) lead to a runaway chain reaction of improvements.

    4. Editability, upgradability, and a wider breadth of possibility. Unlike the human brain, computer software can receive updates and fixes and can be easily experimented on. The upgrades could also span to areas where human brains are weak. Human vision software is superbly advanced, while its complex engineering capability is pretty low-grade. Computers could match the human on vision software but could also become equally optimized in engineering and any other area

      In the same way computer software can receive updates, what's to say the human brain cannot do so too with technology? It was mentioned earlier in the article that engineers may be able to map the human brain - the idea of editing it should not be so easily dismissed, in my opinion.

    5. it would just need to learn and gather information. If engineers get really good, they’d be able to emulate a real brain with such exact accuracy that the brain’s full personality and memory would be intact once the brain architecture has been uploaded to a computer. If the brain belonged to Jim right before he passed away, the computer would now wake up as Jim (?), which would be a robust human-level AGI, and we could now work on turning Jim into an unimaginably smart ASI, which he’d probably be really excited about

      This is such a crazy concept - would the emulation of a brain need to be hooked up to an emulation of a body to work? Maybe this is how we can drive independent action of an AI - incorporating an experience of things like hunger or thirst to drive actions.

    6. But when it’s told it got something right, the transistor connections in the firing pathways that happened to create that answer are strengthened; when it’s told it was wrong, those pathways’ connections are weakened. After a lot of this trial and feedback, the network has, by itself, formed smart neural pathways and the machine has become optimized for the task. The brain learns a bit like this but in a more sophisticated way, and as we continue to study the brain, we’re discovering ingenious new ways to take advantage of neural circuitry

      The brain skims text at high speed in a way similar to how we prepare a text data set for natural language processing - removing (or skipping over) stop words, i.e. words with little importance in the sentence, like 'of' and 'the'.
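
      The strengthen-on-right, weaken-on-wrong loop the quote describes can be sketched as a perceptron-style weight update. This is a minimal illustration of the general idea, not the specific mechanism any real system uses; the task (learning logical AND) and all parameters are chosen purely for demonstration.

```python
import random

# Trial-and-feedback learning: a single linear unit learns the AND
# function by strengthening or weakening its connection weights
# whenever it gets an answer right or wrong.
random.seed(0)

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 0, 0, 1]  # logical AND

weights = [random.uniform(-1, 1) for _ in range(2)]
bias = random.uniform(-1, 1)
lr = 0.1  # how strongly each piece of feedback adjusts the connections

for _ in range(50):  # many rounds of trial and feedback
    for (x1, x2), t in zip(inputs, targets):
        out = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
        error = t - out  # feedback: +1 strengthens pathways, -1 weakens them
        weights[0] += lr * error * x1
        weights[1] += lr * error * x2
        bias += lr * error

def predict(x1, x2):
    return 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0

print([predict(x1, x2) for x1, x2 in inputs])  # matches targets after training
```

      After enough feedback the unit has "by itself" settled on weights that solve the task - the same shape of process the quote attributes to neural networks, just at toy scale.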

    7. But Tianhe-2 is also a dick, taking up 720 square meters of space, using 24 megawatts of power (the brain runs on just 20 watts)

      This stat is unreal - really solidifies just how impressive the human brain is. I wonder if other animals have a comparable cps to humans, and maybe our 'code' is geared more towards intelligence while other animals have 'code' geared towards their own specialisations.

    8. Sometimes referred to as Strong AI, or Human-Level AI, Artificial General Intelligence refers to a computer that is as smart as a human across the board—a machine that can perform any intellectual task that a human being can. Creating AGI is a much harder task than creating ANI, and we’re yet to do it. Professor Linda Gottfredson describes intelligence as “a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience.” AGI would be able to do all of those things as easily as you can

      This seems to be one of the most difficult problems in technology - it is much easier to specialise something than to generalise it. I understand how machines can make decisions, and how they can make decisions based on data from previous decisions. I'm not sure how we can give a machine a 'drive' - the ability to choose independently which decisions to make.

    9. When we hear a prediction about the future that contradicts our experience-based notion of how things work, our instinct is that the prediction must be naive. If I tell you, later in this post, that you may live to be 150, or 250, or not die at all, your instinct will be, “That’s stupid—if there’s one thing I know from history, it’s that everybody dies.” And yes, no one in the past has not died. But no one flew airplanes before airplanes were invented either

      This is similar to issues encountered with sequential memory in machine learning algorithms - the difference is that we can implement solutions to these issues much more easily than we can rewire our brains, in my opinion. Just another display of the usefulness of machine learning.

    10. The chunk of time between 1995 and 2007 saw the explosion of the internet, the introduction of Microsoft, Google, and Facebook into the public consciousness, the birth of social networking, and the introduction of cell phones and then smart phones. That was Phase 2: the growth spurt part of the S. But 2008 to 2015 has been less groundbreaking, at least on the technological front.

      It seems that rapid growth is driven by new technologies and the levelling off is due to us implementing this technology as well as conditioning the average person to it.

    11. 1. Slow growth (the early phase of exponential growth)

      This reminds me of what Gab Leydon says on the Invest Like the Best podcast - he believes our innovation may hit a human limit, and that the next phase of rapid growth will come from using AI to drive innovation.

    12. 2. Rapid growth (the late, explosive phase of exponential growth)

      Maybe our next rapid growth will be when the metaverse is prevalent?

    13. 1) When it comes to history, we think in straight lines. When we imagine the progress of the next 30 years, we look back to the progress of the previous 30 as an indicator of how much will likely happen. When we think about the extent to which the world will change in the 21st century, we just take the 20th century progress and add it to the year 2000. This was the same mistake our 1750 guy made when he got someone from 1500 and expected to blow his mind as much as his own was blown going the same distance ahead. It’s most intuitive for us to think linearly, when we should be thinking exponentially. If someone is being more clever about it, they might predict the advances of the next 30 years not by looking at the previous 30 years, but by taking the current rate of progress and judging based on that. They’d be more accurate, but still way off. In order to think about the future correctly, you need to imagine things moving at a much faster rate than they’re moving now.

      This graph shows that if we predict based on our current rate of improvement (continuing to plot our growth as a straight line whose gradient is the current rate of growth), we arrive at an inaccurate prediction.

    14. If Kurzweil and others who agree with him are correct, then we may be as blown away by 2030 as our 1750 guy was by 2015—i.e. the next DPU might only take a couple decades—and the world in 2050 might be so vastly different than today’s world that we would barely recognize it. This isn’t science fiction. It’s what many scientists smarter and more knowledgeable than you or I firmly believe—and if you look at history, it’s what we should logically predict

      When we look to the future, we must take into account this law of accelerating returns to predict progress with more accuracy. It is only logical that since our progress is exponential, the amount of time taken for a certain amount of progress would decrease exponentially.
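
      The same toy assumption as before (progress doubling every decade, purely illustrative) shows the flip side: each fixed chunk of progress arrives faster than the last, which is the "DPUs get closer together" claim in numeric form.

```python
import math

# Under P(t) = P0 * 2**(t / 10)  (doubling every decade), the time needed
# to add a fixed chunk of progress shrinks as the starting level rises.
def time_for_fixed_progress(p_start, delta):
    """Years until progress grows from p_start to p_start + delta."""
    return 10 * math.log2((p_start + delta) / p_start)

# The same +100 units of progress, starting from ever-higher levels:
times = [time_for_fixed_progress(p, 100) for p in (100, 200, 400, 800)]
print([round(t, 1) for t in times])  # [10.0, 5.8, 3.2, 1.7]
```

      The first +100 units take a full decade; by the fourth repetition the same jump takes under two years, without any change to the underlying growth law.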

    15. But here’s the interesting thing—if he then went back to 1750 and got jealous that we got to see his reaction and decided he wanted to try the same thing, he’d take the time machine and go back the same distance, get someone from around the year 1500, bring him to 1750, and show him everything. And the 1500 guy would be shocked by a lot of things—but he wouldn’t die. It would be far less of an insane experience for him, because while 1500 and 1750 were very different, they were much less different than 1750 to 2015. The 1500 guy would learn some mind-bending shit about space and physics, he’d be impressed with how committed Europe turned out to be with that new imperialism fad, and he’d have to do some major revisions of his world map conception. But watching everyday life go by in 1750—transportation, communication, etc.—definitely wouldn’t make him die. No, in order for the 1750 guy to have as much fun as we had with him, he’d have to go much farther back—maybe all the way back to about 12,000 BC, before the First Agricultural Revolution gave rise to the first cities and to the concept of civilization. If someone from a purely hunter-gatherer world—from a time when humans were, more or less, just another animal species—saw the vast human empires of 1750 with their towering churches, their ocean-crossing ships, their concept of being “inside,” and their enormous mountain of collective, accumulated human knowledge and discovery—he’d likely die. And then what if, after dying, he got jealous and wanted to do the same thing. If he went back 12,000 years to 24,000 BC and got a guy and brought him to 12,000 BC, he’d show the guy everything and the guy would be like, “Okay what’s your point who cares.” For the 12,000 BC guy to have the same fun, he’d have to go back over 100,000 years and get someone he could show fire and language to for the first time.

      The main takeaway here is that human progress has been an exponential increase.