11 Matching Annotations
  1. Mar 2022
    1. X clipboard: The X Window System has its own clipboard mechanism, often loosely called a cut buffer. Any text you mark by highlighting it with the mouse is automatically copied into what X jargon calls the PRIMARY selection (also "X Window selection" or just "selection"). When you middle-click at the destination, that copied content is pasted there.
    2. In Ubuntu 13.10, Shift+Insert pastes from the selection buffer (the buffer that selecting text writes to). In LibreOffice, Chrome, and Firefox, Shift+Insert pastes from the clipboard. I would therefore like to configure gnome-terminal to do the same.
    3. While this isn't a solution, hopefully this explanation will make it clear why. In Ubuntu there are two clipboard systems at work. The first, which everyone is familiar with, is the freedesktop.org clipboard (it captures the Ctrl+C command). The second is a clipboard manager that has been at play since before Ubuntu even existed: X11. The X server (X11) manages three selections of its own: Primary Selection, Secondary Selection, and Clipboard. When you select text with your pointer it gets copied to a buffer in the X server, the Primary Selection, and awaits pasting by means of the middle mouse button. The other two were designed to be used by applications as a means to share a common clipboard between them; in this case the freedesktop.org clipboard manager in Ubuntu already does this for us. (For a concrete look at the two buffers, see the sketch after this list.)
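As an aside, the two buffers can be observed from Java's AWT, which exposes the X PRIMARY selection separately from the regular CLIPBOARD. A minimal sketch, assuming a running X session (in a headless environment `getDefaultToolkit()` will fail, and on platforms without a primary selection `getSystemSelection()` returns null):

```java
import java.awt.Toolkit;
import java.awt.datatransfer.Clipboard;
import java.awt.datatransfer.DataFlavor;

public class ShowSelections {
    // Prints a clipboard's text content, if it currently holds any.
    static void dump(String name, Clipboard cb) throws Exception {
        if (cb != null && cb.isDataFlavorAvailable(DataFlavor.stringFlavor)) {
            System.out.println(name + ": " + cb.getData(DataFlavor.stringFlavor));
        } else {
            System.out.println(name + ": <no text available>");
        }
    }

    public static void main(String[] args) throws Exception {
        Toolkit tk = Toolkit.getDefaultToolkit();
        // CLIPBOARD: filled by an explicit copy (Ctrl+C, or Ctrl+Shift+C in a terminal).
        dump("CLIPBOARD", tk.getSystemClipboard());
        // PRIMARY: filled merely by highlighting text; pasted with middle-click.
        dump("PRIMARY", tk.getSystemSelection());
    }
}
```

Highlighting some text and then copying different text with Ctrl+C before running this should show the two buffers holding different contents.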
  2. Oct 2021
    1. For users who prefer using a side window for the org-roam buffer, the example configuration sketched after this list should provide a good starting point:
    2. org-roam-buffer-display-dedicated: Launch an Org-roam buffer for a specific node without visiting its file. Unlike org-roam-buffer-toggle, you can have multiple such buffers, and their content won’t be automatically replaced with a new node at point.
    3. org-roam-buffer-toggle: Launch an Org-roam buffer that tracks the node currently at point. This means that the content of the buffer changes as the point is moved, if necessary.
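The excerpt cuts off before the configuration it refers to; the org-roam manual suggests a side-window setup along these lines (an Emacs Lisp sketch; the side, slot, and width values are matters of taste):

```emacs-lisp
;; Display the *org-roam* buffer in a dedicated side window on the right,
;; taking up a third of the frame width.
(add-to-list 'display-buffer-alist
             '("\\*org-roam\\*"
               (display-buffer-in-side-window)
               (side . right)
               (slot . 0)
               (window-width . 0.33)
               (window-parameters . ((no-other-window . t)
                                     (no-delete-other-windows . t)))))
```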
  3. Dec 2019
    1. SDS in loading buffer may contribute to band smearing in precast GelRed® gels. If this occurs, we recommend using the post-staining protocol.
  4. May 2017
    1. Optimum buffer size is related to a number of things: file system block size, CPU cache size, and cache latency.

       Most file systems are configured to use block sizes of 4096 or 8192. In theory, if you configure your buffer size so you are reading a few bytes more than the disk block, the operations with the file system can be extremely inefficient (e.g., if you configured your buffer to read 4100 bytes at a time, each read would require 2 block reads by the file system). If the blocks are already in cache, then you wind up paying the price of RAM -> L3/L2 cache latency. If you are unlucky and the blocks are not in cache yet, then you pay the price of the disk -> RAM latency as well.

       This is why you see most buffers sized as a power of 2, and generally larger than (or equal to) the disk block size. This means that one of your stream reads could result in multiple disk block reads - but those reads will always use a full block - no wasted reads.

       Now, this is offset quite a bit in a typical streaming scenario, because the block that is read from disk will still be in memory when you hit the next read (we are doing sequential reads here, after all) - so you wind up paying the RAM -> L3/L2 cache latency price on the next read, but not the disk -> RAM latency. In terms of order of magnitude, disk -> RAM latency is so large that it pretty much swamps any other latency you might be dealing with.

       So, I suspect that if you ran a test with different buffer sizes (I haven't done this myself), you would probably find a big impact of buffer size up to the size of the file system block. Above that, I suspect that things would level out pretty quickly.

       There are a ton of conditions and exceptions here - the complexities of the system are actually quite staggering (just getting a handle on L3 -> L2 cache transfers is mind-bogglingly complex, and it changes with every CPU type). This leads to the 'real world' answer: if your app is like 99% of apps out there, set the buffer size to 8192 and move on (even better, choose encapsulation over performance and use BufferedInputStream to hide the details). If you are in the 1% of apps that are highly dependent on disk throughput, craft your implementation so you can swap out different disk-interaction strategies, and provide the knobs and dials to allow your users to test and optimize (or come up with some self-optimizing system).

      What's the cache size to keep when reading from file to a buffer?
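To make the "set the buffer size to 8192 and move on" advice concrete, here is a minimal Java sketch of the recommended approach (the 8 KiB figure is just the conventional default discussed above; the file path comes from the command line):

```java
import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ChunkedRead {
    // 8 KiB: a power of two, and >= the common 4096/8192 file-system block
    // sizes, so every underlying disk read consumes whole blocks.
    private static final int BUF_SIZE = 8192;

    public static void main(String[] args) throws IOException {
        long total = 0;
        try (InputStream in = new BufferedInputStream(
                new FileInputStream(args[0]), BUF_SIZE)) {
            byte[] chunk = new byte[BUF_SIZE];
            int n;
            // Sequential chunked reads; BufferedInputStream refills its
            // internal buffer one full block at a time.
            while ((n = in.read(chunk)) != -1) {
                total += n; // process chunk[0..n) here
            }
        }
        System.out.println("Read " + total + " bytes");
    }
}
```

Wrapping the raw FileInputStream in a BufferedInputStream is the "encapsulation over performance" option the answer mentions: callers can read byte by byte if they like, and the buffering details stay hidden.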