25 Matching Annotations
  1. Mar 2024
    1. https://rmkn.fr/notes/share/public-interest-technologist

      I listen to this video when I need inspiration and confidence. It reminds me of what is important, that we can change the world, and how I would like it to change.

    1. https://rmkn.fr/notes/share/the-ecosystem-is-moving

      Moxie argues that building centralized services is currently the most practical option due to the challenges of long-term planning in our modern world.

      Despite its flaws, centralization is seen as a necessary step towards eventual decentralization. It excels at reaching a large number of users and familiarizing them with new features and higher standards. It moves swiftly in developing the technology and knowledge required to construct more robust decentralized systems.

      However, there is a concern that Signal, while being a positive initiative for privacy and security, may face obstacles such as funding or political issues that could lead to its disappearance. Therefore, it is crucial to build a power-agnostic alternative that can overcome these obstacles and continue to evolve, ensuring that we do not lose the value that Signal provides.

      The important questions to solve:

      How can we secure long-term funding for communication services that can endure for centuries? How can we ensure that the development and implementation of such services are controlled in a way that maintains high standards for centuries, even in challenging regions?

  2. Nov 2023
    1. It may not be as beautiful as federation, but at this point it seems that it will have to do.

      What if centralized software is just a transition? It was new and magical to simply buy a smartphone and gain access to so many new services so easily. But the future is not about fancy new services. It will be about having the right ones, ones that respect fundamental human rights. And sharing control over resources is the core basis of that.

    2. changes are only likely to be possible in centralized environments with more control, rather than less

      We live in a world with changes everywhere, all the time. Too many changes. The important task now is not to make every change possible faster, but to choose the best ones and take all the time they need to become reality and win the competition.

    3. Federated services always seem to coalesce around a provider that the bulk of people use

      Companies exploit human behavior by creating products people do want but that are not in their best interest. They use massive funding to chase users. They create this faster world you mention but seem not to like. I don't like it either. So we must build alternatives that work well, even if they take decades to build. We may be building foundations for centuries to come.

    4. a climate of uncertainty, never knowing whether things will work or not

      On a scale of years, perhaps, but what about decades? The world still relies on email today because it is reliable enough. But we are not sure services like Signal or WhatsApp will last decades too, because of the small size of the group that controls them. There could be funding issues, country-specific bad events, or even egocentric personal drama like what is happening at OpenAI.

  3. Oct 2023
    1. we weren't gifted with that virtuous extra-caring that prominent altruists must have

      What if new generations could be better at this, but don't yet know how to take it on?

    2. learned not to trust their care-o-meters

      I would say "train our internal care-o-meter" instead of living against it

    3. caring about the world isn't about having a gut feeling that corresponds to the amount of suffering in the world, it's about doing the right thing anyway. Even without the feeling.

      Our brain can bypass moral feelings in multiple ways. And aren't our moral feelings themselves produced by the brain?

    4. instead of just asking his gut how much he cares about de-oiling lots of birds, he shuts up and multiplies.

      what else could we do if we train our brain to ignore the feeling and just act on numbers?

    1. global inequality responsible for much of the world’s suffering

      Why do we have this problem in the first place?

    2. Theories of change that focus solely on overturning current societal structures generally lack concreteness

      In my opinion, it is nearly impossible for any individual to fully understand and control the entire world. Rather, the world progresses through a collective effort shaped by countless individuals. This implies that no one person can directly alter the overall direction, but over time, with sufficient influence, the direction can indeed change.

    3. if they live in a high-income country, even an average person earning a modest salary is often wealthy compared to the rest of the world

      "What about wealth beyond income? If you have a family without assets or inheritance, and your only source of support is your salary, should you invest in a home to safeguard your family when you can no longer work? How do you manage the high cost of living in wealthy countries? For many families, their entire financial stability relies on their ability to work. Moreover, what can be done about the increasing living costs that surpass income growth? And how do we address jobs with poor conditions that cannot be sustained until retirement?"

    1. use numbers to roughly weigh how much different actions help. The goal is to find the best ways to help, rather than just working to make any difference at all.

      Don't look for precise numbers; we are talking about 20/80 proportions here. But sometimes it is more like 1/1000, so there is no debate. (A rough sketch of this kind of comparison follows this list.)

    2. Metaculus gave a probability of a Russian invasion of Ukraine of 47% by mid January 2022, and 80% shortly before the invasion on the 24th of February
    3. focusing on the groups who are most neglected, which usually means focusing on those who don’t have as much power to protect their own interests.

      We should not do things for them; we should help them build their own things and seek autonomy. They decide what to do, and they do the work themselves.

    4. what matters is that the world gets better, not that you do it with your own two hands. So people applying effective altruism often try to help indirectly, by empowering others.

      We should kindly and regularly remind our ego of this; otherwise our motivations are not lucid.

    5. ensure AI systems continue to further human values, even as they become equal (or superior) to humans in their capabilities, is called the AI alignment problem, and solving it requires advances in computer science

      I'm not sure this problem requires even more advanced computer science. On the contrary, I think it requires more advanced social sciences: new techniques to share governance and control over the technology
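
A rough sketch of the comparison mentioned in the first annotation of this list. The two interventions, their costs, and their outcome counts are invented for illustration; the point is only that when the cost-effectiveness ratio is large, rough estimates already settle the question.

```python
# Two hypothetical interventions with invented numbers. The point of rough
# quantitative weighing: when the ratio is large, precision does not matter.
interventions = {
    "A": {"cost_usd": 1_000_000, "outcomes": 50},      # ~$20,000 per outcome
    "B": {"cost_usd": 1_000_000, "outcomes": 12_500},  # ~$80 per outcome
}

cost_per_outcome = {
    name: d["cost_usd"] / d["outcomes"] for name, d in interventions.items()
}
for name, cost in cost_per_outcome.items():
    print(f"{name}: ~${cost:,.0f} per outcome")

ratio = cost_per_outcome["A"] / cost_per_outcome["B"]
print(f"B is roughly {ratio:.0f}x more cost-effective; rough inputs are enough to see it")
```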

  4. www.semanticscholar.org
    1. OpenAI is looking to predict performance and safety because models are too big to be evaluated directly. To me this implies a high probability that people will start to replace their own capabilities with models that are not safe and relevant enough. It could cause misalignment between people and their environment, or worse, their perception of their environment. (A sketch of this kind of performance extrapolation follows this list.)

    2. model behavior in high-risk areas which require niche expertise to evaluate, as well as assess risks that will become relevant for very advanced AIs such as power seeking

      But if we use low-quality fine-tuning and probabilistic performance evaluation, how can we assess risks and relevance properly? How big is the risk of believing the model is safe and relevant when in fact it is not?

    3. undesired behaviors can arise when instructions to labelers were underspecified during reward model data collection portion

      We know labelers are often many people who are not properly trained or paid, and the task is hard. How much does this affect the behavior of the model?

    4. the model may also become overly cautious on safe inputs,

      What if people depend on the model for important tasks, but then it refuses their input for some reason? What if people forget how to do things because the algorithm does them, and then it decides to stop?

    5. we rely heavily on our models themselves as tools

      What if the tools are not well aligned? Could misalignment propagate?

    6. accurately predicting future capabilities is important for safety. Going forward we plan to refine these methods and register performance predictions across various capabilities before large model training begins, and we hope this becomes a common goal in the field

      Can safety ever be a probabilistic performance estimate? What if safety can still be broken by low-probability bad performance?

    7. The post-training alignment process results in improved performance on measures of factuality and adherence to desired behavior

      What is the desired behavior exactly? How should it be defined? It seems like a difficult problem.
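
A sketch of the performance extrapolation referred to in the first annotation of this list: fitting a curve to small training runs and extrapolating to a much larger one. This assumes a simple power law with an irreducible-loss term, which is the general shape of such fits; every constant and data point below is invented, not taken from the report.

```python
import numpy as np
from scipy.optimize import curve_fit

# Final loss as a function of training compute: a power law plus an
# irreducible term. Compute is expressed relative to the smallest run.
def scaling_law(compute_ratio, a, b, irreducible):
    return a * compute_ratio ** (-b) + irreducible

# Hypothetical (compute ratio, final loss) pairs from small training runs.
compute_ratio = np.array([1, 3, 10, 30, 100, 300], dtype=float)
loss = np.array([3.10, 2.79, 2.55, 2.40, 2.28, 2.20])

params, _ = curve_fit(scaling_law, compute_ratio, loss, p0=(1.0, 0.3, 2.0))

# Extrapolate to a run 10,000x larger than the smallest one.
predicted = scaling_law(10_000.0, *params)
print("fitted a=%.2f, b=%.2f, irreducible=%.2f" % tuple(params))
print("predicted loss at 10,000x compute: %.2f" % predicted)
```

Such a fit yields a point estimate of average performance, not a guarantee, which is exactly why the annotations above ask whether safety can rest on a probabilistic estimate.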