438 Matching Annotations
  1. Oct 2023
    1. It should not be used as a primary decision-making tool, but instead as a complement to other methods of determining the source of a piece of text.

      This is actually true of any of these LLM models, and for any task.

    1. Reinforcement learning uses neural networks to generate a mathematical expression sequentially by adding mathematical symbols from a predefined vocabulary and using the learned policy to decide which notation symbol to be added next140. The mathematical formula is represented as a parse tree. The learned policy takes the parse tree as input to determine what leaf node to expand and what notation (from the vocabulary) to add

      very interesting approach
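      A minimal sketch of the sequential-construction idea, for intuition only: a toy vocabulary of symbols with fixed arities, and a uniform random choice standing in for the learned policy that would pick the next symbol. The vocabulary and length budget here are made up.

      ```python
      import random

      # Toy vocabulary: (token, arity). A learned policy would replace random.choice.
      VOCAB = [("+", 2), ("*", 2), ("sin", 1), ("x", 0), ("1", 0)]

      def grow_expression(max_len=15):
          """Grow a prefix-notation expression one symbol at a time."""
          tokens, open_slots = [], 1            # one slot: the root of the parse tree
          while open_slots > 0 and len(tokens) < max_len:
              # Near the length budget, only allow leaves so the tree can close.
              choices = [v for v in VOCAB if v[1] == 0] if len(tokens) + open_slots >= max_len else VOCAB
              tok, arity = random.choice(choices)   # <- the policy's decision point
              tokens.append(tok)
              open_slots += arity - 1               # this token fills one slot, opens `arity` new ones
          return tokens

      print(grow_expression())   # e.g. ['+', 'sin', 'x', '1']
      ```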

    2. In chemistry, models such as simplified molecular-input line-entry system (SMILES)-VAE155 can transform SMILES strings, which are molecular notations of chemical structures in the form of a discrete series of symbols that computers can easily understand, into a differentiable latent space that can be optimized using Bayesian optimization techniques (Fig. 3c).

      This could be useful for chemistry research for robotic labs.

    3. Neural operators are guaranteed to be discretization invariant, meaning that they can work on any discretization of inputs and converge to a limit upon mesh refinement. Once neural operators are trained, they can be evaluated at any resolution without the need for re-training. In contrast, the performance of standard neural networks can degrade when data resolution during deployment changes from model training.

      Look this up: is anyone familiar with this? It sounds complicated but very promising for domains with a large range of resolutions (medical imaging, wildfire management).

    4. Standard neural network models can be inadequate for scientific applications as they assume a fixed data discretization. This approach is unsuitable for many scientific datasets collected at varying resolutions and grids.

      Is the fixed discretization resolution of standard neural networks an issue for science?

    5. Applications of symbolic regression in physics use grammar VAEs150. These models represent discrete symbolic expressions as parse trees using context-free grammar and map the trees into a differentiable latent space. Bayesian optimization is then employed to optimize the latent space for symbolic laws while ensuring that the expressions are syntactically valid. In a related study, Brunton and colleagues151 introduced a method for differentiating symbolic rules by assigning trainable weights to predefined basis functions. Sparse regression was used to select a linear combination of the basis functions that accurately represented the dynamic system while maintaining compactness. Unlike equivariant neural networks, which use a predefined inductive bias to enforce symmetry, symmetry can be discovered as the characteristic behaviour of a domain. For instance, Liu and Tegmark152 described asymmetry as a smooth loss function and minimized the loss function to extract previously unknown symmetries. This approach was applied to uncover hidden symmetries in black-hole waveform datasets, revealing unexpected space–time structures that were historically challenging to find.

      This seems very important, even though I only understand half of it. My question is: can similar approaches be applied to planning in complex domains, or to meaning and truth in language?
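      To make the sparse-regression half concrete, here is a hedged sketch in the spirit of the Brunton-style approach: fit the time derivative of an observed signal as a sparse linear combination of predefined basis functions. The system (dx/dt = -2x), the basis library, and the use of Lasso in place of the original sequential thresholded least squares are all my own simplifications.

      ```python
      import numpy as np
      from sklearn.linear_model import Lasso

      # Made-up 1-D system for illustration: dx/dt = -2*x, observed on a time grid.
      t = np.linspace(0.0, 2.0, 200)
      x = np.exp(-2.0 * t)
      dxdt = np.gradient(x, t)

      # Library of candidate basis functions evaluated on the data.
      library = np.column_stack([np.ones_like(x), x, x**2, x**3])
      names = ["1", "x", "x^2", "x^3"]

      # Sparse regression selects a compact linear combination of basis functions;
      # Lasso stands in for the thresholded least squares used in the original work.
      model = Lasso(alpha=1e-3, fit_intercept=False).fit(library, dxdt)
      for name, coef in zip(names, model.coef_):
          print(f"{name:>4}: {coef:+.3f}")   # the dominant weight should land on x
      ```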

    6. to address the difficulties that scientists care about, the development and evaluation of AI methods must be done in real-world scenarios, such as plausibly realizable synthesis paths in drug design217,218, and include well calibrated uncertainty estimators to assess the model’s reliability before transitioning it to real-world implementation

      It's important to move beyond toy models.

    7. However, current transfer-learning schemes can be ad hoc, lack theoretical guidance213 and are vulnerable to shifts in underlying distributions214. Although preliminary attempts have addressed this challenge215,216, more exploration is needed to systematically measure transferability across domains and prevent negative transfer.

      There is still a lot of work to do to know how to best use human knowledge to guide learning systems and how to reuse models in different domains.

    8. Another approach for using neural networks to solve mathematical problems is transforming a mathematical formula into a binary sequence of symbols. A neural network policy can then probabilistically and sequentially grow the sequence one binary character at a time6. By designing a reward that measures the ability to refute the conjecture, this approach can find a refutation to a mathematical conjecture without prior knowledge about the mathematical problem.

      A nice idea: learn a formula of symbols which can be evaluated logically for truth. But do they mention more general approaches, such as using SAT solvers for this task? See Vijay Ganesh's work.

    9. AI methods have become invaluable when hypotheses involve complex objects such as molecules. For instance, in protein folding, AlphaFold210 can predict the 3D atom coordinates of proteins from amino acid sequences with atomic accuracy, even for proteins whose structure is unlike any of the proteins in the training dataset.

      This is an important category, but it can't apply to all fields and will have a limit to what it can do to move science forward. It's also very dependent on vast computing resources.

    10. Transformer architectures

      Question: what is the inductive bias of Transformers for NLP? Can we define the symmetries that are implicitly leveraged in the architecture?

    11. Such pretrained models96,97,98 with a broad understanding of a scientific domain are general-purpose predictors that can be adapted for various tasks, thereby improving label efficiency and surpassing purely supervised methods8.

      Pre-trained models: these are obviously important and powerful; they almost always work better than training from scratch.

      general-purpose predictors: However, we should be suspicious of accepting the claim that they are general-purpose predictors. Why?

      • Have all of the scenarios been tested?
      • Does the system have a general underlying model?
      • Is there some bias in the training and testing data?

      Example:
      • You pretrain a model on the motion of objects on a plane, such as a pool table, and learn a very good model to predict movement.
      • Now, does it work if the table is curved, or even has bumps and imperfections?
      • Now train it on 3D Newtonian examples: will it predict relativistic effects? (No)

    12. In the analysis of scientific images, objects do not change when translated in the image, meaning that image segmentation masks are translationally equivariant as they change equivalently when input pixels are translated.

      an example of symmetry
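      A tiny numerical check of what translational equivariance means, using a toy 1-D "image", a circular convolution built from np.roll (so the property holds exactly), and thresholding as a stand-in for segmentation. All names here are mine, not from the paper.

      ```python
      import numpy as np

      def circular_conv(x, kernel):
          """Circular 1-D convolution: exactly equivariant to circular shifts."""
          return sum(w * np.roll(x, k) for k, w in enumerate(kernel))

      def segment(x, kernel=(0.25, 0.5, 0.25), thresh=0.5):
          """Toy 'segmentation': smooth, then threshold."""
          return (circular_conv(x, kernel) > thresh).astype(int)

      x = np.zeros(12)
      x[3:6] = 1.0                 # a toy 1-D "image" containing one object
      shift = 4

      # Equivariance: segmenting a shifted input == shifting the segmentation.
      assert np.array_equal(segment(np.roll(x, shift)), np.roll(segment(x), shift))
      print("translation equivariance holds for this toy pipeline")
      ```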

    13. Symmetry is a widely studied concept in geometry69. It can be described in terms of invariance and equivariance (Box 1) to represent the behaviour of a mathematical function, such as a neural feature encoder, under a group of transformations, such as the SE(3) group in rigid body dynamics.

      Symmetry is a very broad concept even beyond geometry, although that is the easiest area to think about. If you are interested, it is worth looking into category theory and symmetry more generally. If you can find a type of symmetry that no one has, for a meaningful categorical/geometric pattern that relates to a real type of data, task or domain, then you might be able to start the next new architecture revolution.

    14. Another strategy for data labelling leverages surrogate models trained on manually labelled data to annotate unlabelled samples and uses these predicted pseudo-labels to supervise downstream predictive models.

      This kind of bootstrapping of human labelling is what made ChatGPT (v3) break through the level of coherence that caused so much excitement in Nov 2022 and afterwards.

      It is also becoming a very common strategy, seemingly replacing an entire industry of full human labelling, with a more focussed process of label-learn-pseudolabel-refine-repeat.
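      A minimal sketch of that label → learn → pseudo-label → refine loop, assuming scikit-learn, a generic feature matrix, and an arbitrary confidence threshold; a real pipeline would also re-check label quality rather than trusting the surrogate blindly.

      ```python
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      def pseudo_label_loop(X_lab, y_lab, X_unlab, rounds=3, conf_thresh=0.95):
          """Iteratively promote confident surrogate predictions to pseudo-labels."""
          X_train, y_train = X_lab.copy(), y_lab.copy()
          pool = X_unlab.copy()
          model = LogisticRegression(max_iter=1000)
          for _ in range(rounds):
              model.fit(X_train, y_train)            # learn on (pseudo-)labelled data
              if len(pool) == 0:
                  break
              probs = model.predict_proba(pool)
              confident = probs.max(axis=1) >= conf_thresh
              if not confident.any():
                  break
              # Confident predictions become pseudo-labels for the next round.
              X_train = np.vstack([X_train, pool[confident]])
              y_train = np.concatenate([y_train, model.predict(pool[confident])])
              pool = pool[~confident]
          return model
      ```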

    15. To identify rare events for future scientific enquiry, deep-learning methods18 replace pre-programmed hardware event triggers with algorithms that search for outlying signals to detect unforeseen or rare phenomena

      The importance of filtering out irrelevant data.

    16. Recent findings demonstrate the potential for unsupervised language AI models to capture complex scientific concepts15, such as the periodic table, and predict applications of functional materials years before their discovery, suggesting that latent knowledge regarding future discoveries may be embedded in past publications.

      This is one I often point to, and it wasn't even using the latest transformer approach to language modelling.

    17. inductive biases (Box 1), which are assumptions representing structure, symmetry, constraints and prior knowledge as compact mathematical statements. However, applying these laws can lead to equations that are too complex for humans to solve, even with traditional numerical methods9. An emerging approach is incorporating scientific knowledge into AI models by including information about fundamental equations, such as the laws of physics or principles of molecular structure and binding in protein folding. Such inductive biases can enhance AI models by reducing the number of training examples needed to achieve the same level of accuracy10 and scaling analyses to a vast space of unexplored scientific hypotheses11.

      Inductive biases: these are becoming more and more critical to understand, and are a good place for academic researchers to focus for new advances, since they don't generally depend on scale or vast amounts of data. These are fundamental insights into the symmetries and structure of a domain, task or architecture.

    18. and coupled with new algorithms

      Almost an afterthought here; I would cast it differently, since the new algorithms are a major part of it as well.

      Listed algorithm types:
      • geometric deep learning
      • self-supervised learning of foundation models
      • generative models
      • reinforcement learning

    19. geometric deep learning (Box 1) has proved to be helpful in integrating scientific knowledge, presented as compact mathematical statements of physical relationships, prior distributions, constraints and other complex descriptors, such as the geometry of atoms in molecules

      geometric deep learning: an interesting broad category for graph learning and other methods. Is this a common way to refer to this subfield?

    1. Quantitatively, SPRING with GPT-4 outperforms all state-of-the-art RL baselines, trained for 1M steps, without any training.

      Them's fighten' words!

      I haven't read it yet, but we're putting it on the list for this fall's reading group. Seriously, a strong result with a very strong implied claim. They are careful to say it comes from their empirical results; very worth a look. I suspect the amount of implicit knowledge in the papers, text and DAG is helping to do this.

      The Big Question: is their comparison to RL baselines fair, given that those baselines are trained from scratch? What does a fair comparison with any from-scratch model (RL or supervised) mean when the LLM approach (or any approach using a foundation model) is not really starting from scratch?

  2. Sep 2023
  3. Aug 2023
    1. Title: Delays, Detours, and Forks in the Road: Latent State Models of Training Dynamics
      Authors: Michael Y. Hu, Angelica Chen, Naomi Saphra, Kyunghyun Cho
      Note: This paper seems cool, using older interpretable machine learning models (graphical models) to understand what is going on inside a deep neural network.

      Link: https://arxiv.org/pdf/2308.09543.pdf

  4. Jul 2023
    1. “Rung 1.5” Pearl’s ladder of causation [1, 10] ranks structures in a similar way as we do, i.e., increasing a model’s causal knowledge will yield a higher place upon his ladder. Like Pearl, we have three different levels in our scale. However, they do not correspond one-to-one.

      They rescale Pearl's ladder levels downwards and define a new scale, arguing that the original definition of counterfactuals as a separate level on its own actually combines multiple types of added reasoning complexity.

    1. We find empirically that for best-of-n (BoN) sampling

      They found this relationship surprising, but it does seem to fit better than other functions which mimic the general shape.

      Question: is there a good reason why?
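      For reference, a minimal sketch of best-of-n (BoN) sampling itself; `generate` and `proxy_reward` are placeholder callables standing in for the policy model and the learned reward model, not anything from the paper.

      ```python
      def best_of_n(prompt, generate, proxy_reward, n=16):
          """Draw n candidate completions and keep the one the proxy reward likes best."""
          candidates = [generate(prompt) for _ in range(n)]
          return max(candidates, key=lambda completion: proxy_reward(prompt, completion))
      ```

      If I remember the setup right, the optimization strength of BoN is measured by its KL divergence from the base policy, which has the closed form \(\log n - (n-1)/n\).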

    2. RL

      "for ... we don't see any overoptimization, we just see the .. monotonically improves"

      For that case, though, I don't see growth here that couldn't still bend down later.

    1. How LaMDA handles groundedness through interactions with an external information retrieval system

      Does LaMDA always ask these questions? How far down the chain does it go?

    2. Daniel Adiwardana, Minh-Thang Luong, David R. So, Jamie Hall, Noah Fiedel, Romal Thoppilan, Zi Yang, Apoorv Kulshreshtha, Gaurav Nemade, Yifeng Lu, and Quoc V. Le. Towards a human-like open-domain chatbot. arXiv preprint arXiv:2001.09977, 2020

      SSI metric definitions

    3. LaMDA Mount Everest provides facts that could not be attributed to known sources in about 30% of responses

      Even with all this work, it will hallucinate about 30% of the time

    1. Because DDPG is an off-policy algorithm, the replay buffer can be large, allowing the algorithm to benefit from learning across a set of uncorrelated transitions.

      Off-policy algorithms can have a larger replay buffer.

    2. One challenge when using neural networks for reinforcement learning is that most optimization algorithms assume that the samples are independently and identically distributed. Obviously, when the samples are generated from exploring sequentially in an environment this assumption no longer holds. Additionally, to make efficient use of hardware optimizations, it is essential to learn in mini-batches, rather than online. As in DQN, we used a replay buffer to address these issues

      Motivation for mini-batches of training experiences and for the use of replay buffers for Deep RL.
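      A minimal replay-buffer sketch to make that concrete: a large buffer plus uniform random minibatch sampling breaks up the temporal correlation of sequentially generated transitions. The capacity and batch size below are arbitrary choices.

      ```python
      import random
      from collections import deque

      class ReplayBuffer:
          """Stores transitions; uniform sampling de-correlates training minibatches."""

          def __init__(self, capacity=1_000_000):
              self.buffer = deque(maxlen=capacity)   # off-policy, so the buffer can be large

          def add(self, state, action, reward, next_state, done):
              self.buffer.append((state, action, reward, next_state, done))

          def sample(self, batch_size=64):
              batch = random.sample(self.buffer, batch_size)
              states, actions, rewards, next_states, dones = zip(*batch)
              return states, actions, rewards, next_states, dones
      ```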

    3. The DPG algorithm maintains a parameterized actor function \(\mu(s|\theta^\mu)\) which specifies the current policy by deterministically mapping states to a specific action. The critic \(Q(s, a)\) is learned using the Bellman equation as in Q-learning. The actor is updated by applying the chain rule to the expected return from the start distribution J with respect to the actor parameters:

      \(\nabla_{\theta^\mu} J \approx \mathbb{E}_{s_t \sim \rho^\beta}\big[\nabla_{\theta^\mu} Q(s, a|\theta^Q)\big|_{s=s_t,\,a=\mu(s_t|\theta^\mu)}\big] = \mathbb{E}_{s_t \sim \rho^\beta}\big[\nabla_a Q(s, a|\theta^Q)\big|_{s=s_t,\,a=\mu(s_t)}\,\nabla_{\theta^\mu} \mu(s|\theta^\mu)\big|_{s=s_t}\big] \quad (6)\)

      Silver et al. (2014) proved that this is the policy gradient, the gradient of the policy’s performance

      The original DPG algorithm (non-deep) takes the Actor-Critic idea and makes the Actor deterministic.
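      A hedged PyTorch-style sketch of the actor update implied by Eq. (6): ascend the critic's value at the actor's own action, letting autograd apply the chain rule through the critic into the actor. `actor` and `critic` are assumed to be ordinary nn.Module networks; this is not code from the paper.

      ```python
      import torch

      def actor_update(actor, critic, actor_opt, states):
          """DPG/DDPG-style actor step: ascend Q(s, mu(s)) w.r.t. the actor parameters."""
          actor_opt.zero_grad()
          actions = actor(states)                        # a = mu(s | theta_mu)
          actor_loss = -critic(states, actions).mean()   # maximize Q  <=>  minimize -Q
          actor_loss.backward()                          # chain rule: dQ/da * da/dtheta_mu
          actor_opt.step()
          return actor_loss.item()
      ```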

    4. Interestingly, all of our experiments used substantially fewer steps of experience than was used by DQN learning to find solutions in the Atari domain.

      Training with DDPG seems to require fewer steps/examples than DQN.

    5. The original DPG paper evaluated the algorithm with toy problems using tile-coding and linear function approximators. It demonstrated data efficiency advantages for off-policy DPG over both on- and off-policy stochastic actor critic.

      (non-deep) DPG used tile-coding and linear VFAs.

    6. It can be challenging to learn accurate value estimates. Q-learning, for example, is prone to over-estimating values (Hasselt, 2010). We examined DDPG’s estimates empirically by comparing the values estimated by Q after training with the true returns seen on test episodes. Figure 3 shows that in simple tasks DDPG estimates returns accurately without systematic biases. For harder tasks the Q estimates are worse, but DDPG is still able to learn good policies.

      In the simpler tasks, DDPG avoids the over-estimation problem that Q-learning has, without using Double Q-learning.

    7. It is not possible to straightforwardly apply Q-learning to continuous action spaces, because in continuous spaces finding the greedy policy requires an optimization of \(a_t\) at every timestep; this optimization is too slow to be practical with large, unconstrained function approximators and nontrivial action spaces

      Why it is not possible for pure Q-learning to handle continuous action spaces.

    8. Our contribution here is to provide modifications to DPG, inspired by the success of DQN, which allow it to use neural network function approximators to learn in large state and action spaces online

      contribution of this paper.

    9. As with Q learning, introducing non-linear function approximators means that convergence is no longer guaranteed. However, such approximators appear essential in order to learn and generalize on large state spaces.

      Why Q-learning with non-linear function approximators loses its convergence guarantees.

    10. A major challenge of learning in continuous action spaces is exploration. An advantage of off-policy algorithms such as DDPG is that we can treat the problem of exploration independently from the learning algorithm.

      Learning and exploration are handled separately.
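      In practice this just means adding a noise process to the deterministic action when collecting experience, while learning proceeds off-policy from the stored transitions. A sketch, with plain Gaussian noise standing in for the Ornstein-Uhlenbeck process used in the paper:

      ```python
      import numpy as np

      def exploratory_action(actor, state, noise_std=0.1, low=-1.0, high=1.0):
          """Deterministic policy output plus exploration noise, clipped to the valid range."""
          action = np.asarray(actor(state))              # mu(s): the learned deterministic policy
          noise = np.random.normal(0.0, noise_std, size=action.shape)
          return np.clip(action + noise, low, high)
      ```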

    11. This simple change moves the relatively unstable problem of learning the action-value function closer to the case of supervised learning, a problem for which robust solutions exist.
    12. One approach to this problem is to manually scale the features so they are in similar ranges across environments and units. We address this issue by adapting a recent technique from deep learning called batch normalization
    13. This paper introduces the DDPG algorithm, which builds on the existing DPG algorithm from classic RL theory. The main idea is to define a deterministic, or nearly deterministic, policy for situations where the environment is very sensitive to suboptimal actions and one action setting usually dominates in each state. This showed good performance, but could not beat algorithms such as PPO until the additions introduced by SAC. SAC adds an entropy term which essentially rewards the policy for staying stochastic where it is uncertain. With that addition, this family of deterministic policy gradient approaches performs well.

    1. IMPALA (Figure 1) uses an actor-critic setup to learn a policy π and a baseline function V^π. The process of generating experiences is decoupled from learning the parameters of π and V^π. The architecture consists of a set of actors, repeatedly generating trajectories of experience, and one or more learners that use the experiences sent from actors to learn π off-policy.
    2. We are interested in developing new methods capable of mastering a diverse set of tasks simultaneously as well as environments suitable for evaluating such methods.

      Task: train agents that can do more than one thing.

    1. Q-learning (Watkins, 1989) is one of the most popular reinforcement learning algorithms, but it is known to sometimes learn unrealistically high action values because it includes a maximization step over estimated action values, which tends to prefer overestimated to underestimated values

      Q-learning tends to overestimate the value of an action.

    2. show overestimations can occur when the action values are inaccurate, irrespective of the source of approximation error

      They show overestimations occur when there is approximation error in the value function approximation for Q(s,a).

    3. In the original Double Q-learning algorithm, two value functions are learned by assigning each experience randomly to update one of the two value functions, such that there are two sets of weights, θ and θ′
    4. The orange bars show the bias in a single Q-learning update when the action values are \(Q(s,a) = V_*(s) + \epsilon_a\) and the errors \(\{\epsilon_a\}_{a=1}^{m}\) are independent standard normal random variables. The second set of action values Q′, used for the blue bars, was generated identically and independently. All bars are the average of 100 repetitions.
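      A small sketch contrasting the standard Q-learning target with the Double Q-learning target built from the two independently learned value functions mentioned above; `Q1` and `Q2` are assumed to be tabular arrays indexed by [state, action].

      ```python
      import numpy as np

      def q_learning_target(Q, r, s_next, gamma=0.99):
          """Standard target: max over the same (noisy) estimates -> overestimation bias."""
          return r + gamma * np.max(Q[s_next])

      def double_q_target(Q1, Q2, r, s_next, gamma=0.99):
          """Double Q-learning: Q1 selects the action, Q2 evaluates it (and vice versa
          when updating Q2), which decorrelates selection from evaluation."""
          best_action = np.argmax(Q1[s_next])
          return r + gamma * Q2[s_next, best_action]
      ```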
    1. Hyperparameters

      hyperparameters: alpha (learning rate), dropout probability, number of layers in your network, width of network layers, activation function (ReLU, ELU, tanh, ...), CNN?, RNN?, ..., epsilon (for the ε-greedy policy)

      parameters: specific to the problem - parameters of Q(s,a) and the policy π (theta, w), gamma (how important is the future?)

    1. objective function (the “surrogate” objective) is maximized

      PPO is a response to the TRPO algorithm, trying to use the core idea but implement a more efficient and simpler algorithm.

      TRPO defines the problem as a straight optimization problem; no learning is actually involved.

    2. Without a constraint, maximization of \(L^{CPI}\) would lead to an excessively large policy update; hence, we now consider how to modify the objective, to penalize changes to the policy that move \(r_t(\theta)\) away from 1

      The policy iteration objective proposes steps which are too large. It uses a likelihood ratio of the current policy with an older version of the policy, multiplied by the Advantage function. So it uses the change in the policy probability for an action to weight the Advantage function.

    3. A proximal policy optimization (PPO) algorithm that uses fixed-length trajectory segments is shown below. Each iteration, each of N (parallel) actors collect T timesteps of data. Then we construct the surrogate loss on these NT timesteps of data, and optimize it with minibatch SGD
    4. The first term inside the min is \(L^{CPI}\). The second term, \(\mathrm{clip}(r_t(\theta), 1-\epsilon, 1+\epsilon)\,\hat{A}_t\), modifies the surrogate objective by clipping the probability ratio, which removes the incentive for moving \(r_t\) outside of the interval \([1-\epsilon, 1+\epsilon]\). Finally, we take the minimum of the clipped and unclipped objective, so the final objective is a lower bound (i.e., a pessimistic bound) on the unclipped objective

      The "clip" function cuts off the probability ratio output so that some changes in Advantage are ignored.

    5. We can see that \(L^{CLIP}\) is a lower bound on \(L^{CPI}\), with a penalty for having too large of a policy update

      The clipped objective is a lower bound on the unclipped objective used in TRPO. So it is simpler to compute and still provides some guidance, since it never overestimates the true objective.

    6. These methods have the stability and reliability of trust-region methods but are much simpler to implement, requiring only few lines of code change to a vanilla policy gradient implementation, applicable in more general settings
    7. Surrogate objectives, as we interpolate between the initial policy parameter \(\theta_{old}\), and the updated policy parameter, which we compute after one iteration of PPO.

      Another figure to show intuition for the approach by showing how each component changes with respect to following the policy update along the gradient direction.

    1. Noisy Nets. The limitations of exploring using ε-greedy policies are clear in games such as Montezuma’s Revenge, where many actions must be executed to collect the first reward
    2. This famous paper gives a great review of the DQN algorithm a couple years after it changed everything in Deep RL. It compares six different extensions to DQN for Deep Reinforcement Learning, many of which have now become standard additions to DQN and other Deep RL algorithms. It also combines all of them together to produce the "rainbow" algorithm, which outperformed many other models for a while.

    1. For many tasks our models exhibit human-level performance, and we are the first to report computer agents that can craft diamond tools, which can take proficient humans upwards of 20 minutes (24,000 environment actions) of gameplay to accomplish
    2. We extend the internet-scale pretraining paradigm to sequential decision domains through semi-supervised imitation learning wherein agents learn to act by watching online unlabeled videos.
    1. Curriculum learning (at least using current RL methods) is that the agent achieves a small success probability (within available/reasonable compute) on a new task after mastering a previous task.

      Curriculum Learning

    2. We study curriculum learning on a set of goal-conditioned Minecraft tasks, in which the agent is tasked to collect one out of a set of 107 items from the Minecraft tech tree
    3. It has 5 minutes (1500 time steps) to complete the task and obtains a reward of +1 upon success. After each success or failure a new task is selected without resetting the world or respawning the agent
      • Agent has 5 min to find item
      • Next item chosen without resetting world
    1. Liang, Machado, Talvitie, Bowling - AAMAS 2016 "State of the Art Control of Atari Games Using Shallow Reinforcement Learning"

      Response paper to DQN showing that well designed Value Function Approximations can also do well at these complex tasks without the use of Deep Learning

      A great paper showing how to think differently about the latest advances in Deep RL. All is not always what it seems!

    1. Few-Shot (FS) - the model is given a few demonstrations of the task at inference time as conditioning [RWC+19], but no weights are updated

      hints are given but the model is not updated
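      A minimal sketch of what few-shot conditioning looks like at inference time: the demonstrations live only in the prompt, and no gradient step is taken. The task and examples are invented.

      ```python
      def few_shot_prompt(demonstrations, query):
          """Build an in-context prompt; the model's weights are never updated."""
          lines = [f"Q: {q}\nA: {a}" for q, a in demonstrations]
          lines.append(f"Q: {query}\nA:")
          return "\n\n".join(lines)

      prompt = few_shot_prompt(
          demonstrations=[("translate 'chat' to English", "cat"),
                          ("translate 'chien' to English", "dog")],
          query="translate 'oiseau' to English",
      )
      print(prompt)   # the model sees the demonstrations purely as conditioning text
      ```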

  5. Jun 2023
    1. [KMH+20] Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Ra

      Justification for low learning rate in large language models.

    2. As found in [KMH+20, MKAT18], larger models can typically use a larger batch size, but require a smaller learning rate. We measure the gradient noise scale during training and use it to guide our choice of batch size [MKAT18]. Table A.1 shows the parameter settings we used. To train the larger models without running out of memory, we use a mixture of model parallelism within each matrix multiply and model parallelism across the layers of the network. All models were trained on V100 GPU’s on part of a high-bandwidth cluster. Details of the training process and hyperparameter settings are described in the appendix.

      Why is this?

    1. While zero-shot performance establishes a baseline of the potential performance of GPT-2 on many tasks, it is not clear where the ceiling is with finetuning.

      So finetuning could lead to better models.

    2. The Bloom filters were constructed such that the false positive rate is upper bounded by \(\frac{1}{10^8}\). We further verified the low false positive rate by generating 1M strings, of which zero were found by the filter

      Bloom filters used to determine how much overlap there is between train and test set, to be more sure of their results.
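      A quick sketch of the standard Bloom-filter false-positive estimate, \(p \approx (1 - e^{-kn/m})^k\) for m bits, k hash functions and n inserted items; the concrete sizes below are made up purely to show how a bound around \(10^{-8}\) can be reached.

      ```python
      import math

      def bloom_fp_rate(n_items, n_bits, n_hashes):
          """Approximate Bloom filter false-positive rate: (1 - exp(-k*n/m)) ** k."""
          return (1.0 - math.exp(-n_hashes * n_items / n_bits)) ** n_hashes

      # Made-up sizes: ~38 bits per item with 27 hashes gives roughly a 1e-8 rate.
      print(bloom_fp_rate(n_items=10_000_000, n_bits=384_000_000, n_hashes=27))
      ```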

    3. Recent work in computer vision has shown that common image datasets contain a non-trivial amount of near-duplicate images. For instance CIFAR-10 has 3.3% overlap between train and test images (Barz & Denzler, 2019). This results in an over-reporting of the generalization performance of machine learning systems.

      CIFAR-10 performance results are overestimates since some of the training data is essentially in the test set.

  6. Apr 2023
    1. In this way, an NTM can be thought of as simultaneously exploring all computational possibilities in parallel and selecting an accepting branch

      Non-deterministic Turing Machines are able to get lucky and choose the single path to the answer in polynomial time, or be given a "hint" or "proof" or "certificate" for that path. This isn't realistic, but it separates the difficulty of verifying a solution from the difficulty of finding one, as two different tasks.

    2. Computational problems: Intuitively, a computational problem is just a question that can be solved by an algorithm. For example, "is the natural number \(n\) prime?" is a computational problem. A computational problem is mathematically represented as the set of answers to the problem. In the primality example, the problem (call it PRIME) is represented by the set of all natural numbers that are prime: \(\texttt{PRIME} = \{n \in \mathbb{N} \mid n \text{ is prime}\}\). In the theory of computation, these answers are represented as strings; for example, in the primality example the natural numbers could be represented as strings of bits that represent binary numbers. For this reason, computational problems are often synonymously referred to as languages, since strings of bits represent formal languages (a concept borrowed from linguistics); for example, saying that the PRIME problem is in the complexity class NP is equivalent to saying that the language PRIME is in NP.

      Explanation of why computational complexity class proofs with Turing Machines use "strings" instead of algorithms or programs.

    1. Presburger arithmetic is much weaker than Peano arithmetic, which includes both addition and multiplication operations. Unlike Peano arithmetic, Presburger arithmetic is a decidable theory. This means it is possible to algorithmically determine, for any sentence in the language of Presburger arithmetic, whether that sentence is provable from the axioms of Presburger arithmetic. The asymptotic running-time computational complexity of this algorithm is at least doubly exponential, however, as shown by Fischer & Rabin (1974).

      This is an example of a decision problem that is at least doubly exponential \(2^{2^n}\). It is a simpler form of arithmetic where the Halting problem/incompleteness theorem does not apply.

    1. At present, all known algorithms for NP-complete problems require time that is superpolynomial in the input size, in fact exponential in \(O(n^k)\) for some \(k > 0\), and it is unknown whether there are any faster algorithms.

      So how hard are NP-complete problems?

    2. The Subgraph Isomorphism problem is NP-complete. The graph isomorphism problem is suspected to be neither in P nor NP-complete, though it is in NP. This is an example of a problem that is thought to be hard, but is not thought to be NP-complete. This class is called NP-Intermediate problems and exists if and only if P≠NP.

      There might even be some problems that are in NP but not in P and yet are not NP-complete.

    1. We used GPT-4 to help create training data for model fine-tuning and iterate on classifiers across training, evaluations, and monitoring.

      Interesting. You need to consider: is this like data augmentation, like bootstrapping, like adversarial training, or is it like overfitting to your data?

  7. Mar 2023
  8. Feb 2023
    1. Definition 3.2 (simple reward machine).

      The MDP does not change; its dynamics are the same with or without the RM, just as they are with or without a standard reward model. Additionally, the rewards from the RM can be non-Markovian with respect to the MDP because they inherently have a kind of memory of where you've been, limited to the agent's "movement" (almost "in its mind") along the goals for this task.
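      A minimal sketch of that non-Markovian flavour: a tiny reward machine whose internal state (the memory of where you've been along the task) advances on abstract events while the underlying MDP is untouched. The events and rewards are invented for illustration.

      ```python
      class SimpleRewardMachine:
          """Finite-state reward machine: reward depends on (RM state, observed event),
          not only on the MDP's current state, so it is non-Markovian w.r.t. the MDP alone."""

          def __init__(self):
              # transitions[(rm_state, event)] = (next_rm_state, reward)
              self.transitions = {
                  ("u0", "got_key"):     ("u1", 0.0),
                  ("u1", "opened_door"): ("u2", 1.0),   # reward only after key THEN door
              }
              self.state = "u0"

          def step(self, event):
              next_state, reward = self.transitions.get((self.state, event), (self.state, 0.0))
              self.state = next_state
              return reward

      rm = SimpleRewardMachine()
      print(rm.step("opened_door"))  # 0.0 -- door before key gives nothing
      print(rm.step("got_key"))      # 0.0
      print(rm.step("opened_door"))  # 1.0 -- same MDP event, different reward: memory!
      ```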