- Jan 2023
-
arxiv.org
-
Feng et al., 2022. "Training-Free Structured Diffusion Guidance for Compositional Text-to-Image Synthesis"
Shared and found via Gowthami Somepalli (@gowthami@sigmoid.social on Mastodon): StructureDiffusion: improve the compositional generation capabilities of text-to-image #diffusion models by modifying the text guidance using a constituency tree or a scene graph.
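The gist of the guidance trick, as a minimal sketch (illustrative Python, not the authors' code; `encode_text` is a hypothetical stand-in for a CLIP-style text encoder that pads every string to a fixed sequence length): encode each noun-phrase constituent from the parse separately, then fuse those embeddings into the conditioning that the diffusion model's cross-attention layers see.

```python
# Minimal sketch of the StructureDiffusion idea (not the authors' code):
# encode each noun-phrase constituent separately, then fuse the embeddings
# that condition the diffusion model's cross-attention layers.
import torch

def structured_text_embeddings(prompt, noun_phrases, encode_text):
    # encode_text: hypothetical CLIP-style encoder, string -> (seq_len, dim)
    # tensor, assumed to pad to a fixed seq_len so the stack below works.
    full = encode_text(prompt)                           # whole-prompt embedding
    parts = [encode_text(np_) for np_ in noun_phrases]   # one per constituent
    # Fuse by averaging; the paper instead aligns each constituent's tokens
    # to their positions in the prompt and swaps them into the cross-attention
    # values. Averaging just shows the shape of the idea.
    return torch.stack([full] + parts).mean(dim=0)

# Usage: the constituents would come from a constituency parse of the prompt.
# emb = structured_text_embeddings(
#     "a red car and a white sheep",
#     ["a red car", "a white sheep"],
#     encode_text=my_clip_encoder,  # hypothetical
# )
```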
-
-
arxiv.org
-
"Talking About Large Language Models" by Murray Shanahan
-
- Dec 2022
-
arxiv.org
-
Lee et al. - NeurIPS 2022 "Multi-Game Decision Transformers"
-
-
arxiv.org
-
"Decision Transformer: Reinforcement Learning via Sequence Modeling" (Chen, NeurIPS, 2021)
Quickly became a very influential paper, with a new idea of how to learn generative models of action prediction: autoregressive sequence modeling over (return-to-go, state, action) trajectories from demonstrations. There is no explicit optimization of actions or rewards; instead, the target return is given as an input.
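A minimal sketch of that input layout (illustrative PyTorch, assuming a continuous action space; the real model also adds timestep/positional embeddings and other details omitted here): each trajectory is flattened into interleaved (return-to-go, state, action) tokens, and a causal transformer is trained to predict the action at each step.

```python
# Minimal sketch of the Decision Transformer input layout (illustrative,
# not the authors' implementation): trajectories become interleaved
# (return-to-go, state, action) tokens, and the model is trained only to
# predict actions given everything to their left.
import torch
import torch.nn as nn

def returns_to_go(rewards):
    # Return-to-go at step t is the sum of rewards from t to the end.
    rtg = torch.zeros_like(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running += rewards[t]
        rtg[t] = running
    return rtg

class DecisionTransformerSketch(nn.Module):
    def __init__(self, state_dim, act_dim, d_model=128):
        super().__init__()
        self.embed_rtg = nn.Linear(1, d_model)
        self.embed_state = nn.Linear(state_dim, d_model)
        self.embed_action = nn.Linear(act_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.predict_action = nn.Linear(d_model, act_dim)

    def forward(self, rtg, states, actions):
        # rtg: (B, T, 1), states: (B, T, state_dim), actions: (B, T, act_dim)
        B, T, _ = states.shape
        tokens = torch.stack(
            [self.embed_rtg(rtg), self.embed_state(states), self.embed_action(actions)],
            dim=2,
        ).reshape(B, 3 * T, -1)  # interleave as (R_1, s_1, a_1, R_2, s_2, a_2, ...)
        # Causal mask so each token attends only to tokens to its left.
        mask = torch.triu(torch.full((3 * T, 3 * T), float("-inf")), diagonal=1)
        h = self.transformer(tokens, mask=mask)
        # Predict a_t from the hidden state at the s_t position.
        return self.predict_action(h[:, 1::3])
```

At test time, instead of a rollout's actual return, you feed the return you *want* as the first return-to-go token, and the model generates actions consistent with achieving it.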
-
- Nov 2022
-
arxiv.org
-
we propose the Transformer, a model architecture eschewing recurrence and instead relying entirely on an attention mechanism to draw global dependencies between input and output. The Transformer allows for significantly more parallelization …
The essence of their new idea: use the attention mechanism to capture global dependencies between input and output, rather than recurrent links to past states.
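For concreteness, a minimal sketch of the scaled dot-product attention at the core of that mechanism (standard formulation, not tied to any particular implementation):

```python
# Minimal sketch of scaled dot-product attention: every output position can
# attend directly to every input position, so dependencies need not flow
# through a recurrent chain of past states.
import math
import torch

def scaled_dot_product_attention(q, k, v):
    # q, k, v: (batch, seq_len, d_k). Each query scores every key at once,
    # giving direct (global) access to all positions in a single step.
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    weights = torch.softmax(scores, dim=-1)  # attention distribution per query
    return weights @ v                       # weighted sum of values

# Usage: with seq_len = 5 and d_k = 8, out[b, i] mixes information from
# all five positions, regardless of distance.
q = k = v = torch.randn(2, 5, 8)
out = scaled_dot_product_attention(q, k, v)
```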
-
- Sep 2022
-
arxiv.org
-
We study whether sequence modeling can perform policy optimization by evaluating Decision Transformer on offline RL benchmarks
-
- Feb 2022
-
www.supercoloring.com
-
Paper Transformer Toys Templates
-