  1. Last 7 days
    1. For the first time, researchers have precisely quantified what share of the area burned by wildfires is attributable to human-caused warming. That share has grown markedly over the past 20 years. Overall, the fires driven by warming now offset the decline in burning from deforestation. The human-caused share of CO2 emissions, which is relevant for calculating damage claims, is therefore significantly higher than previously assumed. https://www.carbonbrief.org/climate-change-almost-wipes-out-decline-in-global-area-burned-by-wildfires/

  2. Oct 2024
    1. if (dtc->wb_thresh < 2 * wb_stat_error()) { wb_reclaimable = wb_stat_sum(wb, WB_RECLAIMABLE); dtc->wb_dirty = wb_reclaimable + wb_stat_sum(wb, WB_WRITEBACK); } else { wb_reclaimable = wb_stat(wb, WB_RECLAIMABLE); dtc->wb_dirty = wb_reclaimable + wb_stat(wb, WB_WRITEBACK); }

      This is a configuration policy: when the writeback context's dirty threshold is below twice the maximal error of an approximate per-cpu stat counter, the kernel falls back to the slower but exact wb_stat_sum() to count reclaimable and writeback pages, since otherwise the counter error would dominate the comparison (a simplified sketch of the trade-off follows).
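
      A minimal userspace sketch of the trade-off, assuming a hypothetical per-CPU counter whose cheap read can drift by a bounded amount; the names, batch size, and threshold are illustrative, not the kernel's:

      ```c
      /* Simplified model: each CPU keeps a local delta that is only folded into
       * the global count once in a while, so a cheap read can be off by up to
       * NCPUS * BATCH.  Near a small threshold, pay for the exact sum instead. */
      #include <stdio.h>

      #define NCPUS 4
      #define BATCH 32                              /* per-CPU drift bound (illustrative) */

      struct counter { long global; long local[NCPUS]; };

      static long read_fast(struct counter *c) { return c->global; }   /* may be stale */

      static long read_exact(struct counter *c)                        /* folds all deltas */
      {
          long sum = c->global;
          for (int cpu = 0; cpu < NCPUS; cpu++)
              sum += c->local[cpu];
          return sum;
      }

      int main(void)
      {
          struct counter dirty = { .global = 100, .local = { 3, 7, 0, 5 } };
          long max_error = NCPUS * BATCH;
          long thresh = 40;                         /* small threshold: error would dominate */
          int exact = thresh < 2 * max_error;

          printf("dirty = %ld (exact read: %s)\n",
                 exact ? read_exact(&dirty) : read_fast(&dirty), exact ? "yes" : "no");
          return 0;
      }
      ```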

    2. static long wb_min_pause(struct bdi_writeback *wb, long max_pause, unsigned long task_ratelimit, unsigned long dirty_ratelimit, int *nr_dirtied_pause

      This function implements an algorithmic policy that heuristically determines the minimum pause a process must take between consecutive dirty-page writeback balance points. It spreads load on the I/O subsystem so that throttling does not trigger excessive I/O that would hurt overall system performance.

    3. if (!laptop_mode && nr_reclaimable > gdtc->bg_thresh && !writeback_in_progress(wb)) wb_start_background_writeback(wb);

      This is a configuration policy that determines whether to start background writeout: if laptop_mode (which defers disk activity to save power) is not set, the number of reclaimable dirty pages exceeds the bg_thresh threshold, and writeback is not already in progress, the kernel kicks off background writeback.

    4. if (thresh > dirty) return 1UL << (ilog2(thresh - dirty) >> 1);

      This implements a configuration policy that sets how many more pages a task may dirty before it re-checks the dirty limits: roughly the square root of the headroom between the threshold and the current number of dirty pages, so polling gets more frequent as the system approaches the threshold (see the sketch below).
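
      A standalone sketch of the arithmetic, assuming ilog2() behaves like the kernel's (floor of log2 of a non-zero value); the threshold is made up for illustration:

      ```c
      /* 1UL << (ilog2(gap) >> 1) is an integer approximation of sqrt(gap):
       * it halves the exponent of the highest set bit. */
      #include <stdio.h>

      static unsigned long ilog2_ul(unsigned long x)      /* floor(log2(x)), x > 0 */
      {
          unsigned long log = 0;
          while (x >>= 1)
              log++;
          return log;
      }

      static unsigned long dirty_poll_interval(unsigned long dirty, unsigned long thresh)
      {
          if (thresh > dirty)
              return 1UL << (ilog2_ul(thresh - dirty) >> 1);
          return 1;                                       /* no headroom: re-check at once */
      }

      int main(void)
      {
          unsigned long thresh = 100000;                  /* pages, illustrative */

          for (unsigned long dirty = 0; dirty <= thresh; dirty += 25000)
              printf("dirty=%6lu -> re-check after %lu more dirtied pages\n",
                     dirty, dirty_poll_interval(dirty, thresh));
          return 0;
      }
      ```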

    5. limit -= (limit - thresh) >> 5;

      This is a configuration policy that determines how quickly the limit is updated: each update closes 1/32 of the gap between the current limit and thresh, so the cap on dirty memory decays smoothly toward its target instead of jumping (a numeric sketch follows). The limit controls the amount of dirty memory allowed in the system.
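
      A standalone numeric sketch of the decay, with made-up values:

      ```c
      /* limit -= (limit - thresh) >> 5 closes 1/32 of the gap to thresh per
       * update, i.e. an exponential approach rather than an instant jump. */
      #include <stdio.h>

      int main(void)
      {
          unsigned long thresh = 100000, limit = 200000;  /* pages, illustrative */

          for (int step = 0; step < 10 && limit > thresh; step++) {
              limit -= (limit - thresh) >> 5;
              printf("update %2d: limit = %lu\n", step, limit);
          }
          return 0;
      }
      ```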

    6. if (dirty <= dirty_freerun_ceiling(thresh, bg_thresh) && (!mdtc || m_dirty <= dirty_freerun_ceiling(m_thresh, m_bg_thresh))) { unsigned long intv; unsigned long m_intv; free_running: intv = dirty_poll_interval(dirty, thresh); m_intv = ULONG_MAX; current->dirty_paused_when = now; current->nr_dirtied = 0; if (mdtc) m_intv = dirty_poll_interval(m_dirty, m_thresh); current->nr_dirtied_pause = min(intv, m_intv); break; } /* Start writeback even when in laptop mode */ if (unlikely(!writeback_in_progress(wb))) wb_start_background_writeback(wb); mem_cgroup_flush_foreign(wb); /* * Calculate global domain's pos_ratio and select the * global dtc by default. */ if (!strictlimit) { wb_dirty_limits(gdtc); if ((current->flags & PF_LOCAL_THROTTLE) && gdtc->wb_dirty < dirty_freerun_ceiling(gdtc->wb_thresh, gdtc->wb_bg_thresh)) /* * LOCAL_THROTTLE tasks must not be throttled * when below the per-wb freerun ceiling. */ goto free_running; } dirty_exceeded = (gdtc->wb_dirty > gdtc->wb_thresh) && ((gdtc->dirty > gdtc->thresh) || strictlimit); wb_position_ratio(gdtc); sdtc = gdtc; if (mdtc) { /* * If memcg domain is in effect, calculate its * pos_ratio. @wb should satisfy constraints from * both global and memcg domains. Choose the one * w/ lower pos_ratio. */ if (!strictlimit) { wb_dirty_limits(mdtc); if ((current->flags & PF_LOCAL_THROTTLE) && mdtc->wb_dirty < dirty_freerun_ceiling(mdtc->wb_thresh, mdtc->wb_bg_thresh)) /* * LOCAL_THROTTLE tasks must not be * throttled when below the per-wb * freerun ceiling. */ goto free_running; } dirty_exceeded |= (mdtc->wb_dirty > mdtc->wb_thresh) && ((mdtc->dirty > mdtc->thresh) || strictlimit); wb_position_ratio(mdtc); if (mdtc->pos_ratio < gdtc->pos_ratio) sdtc = mdtc; }

      This is an algorithmic policy that determines whether the dirtying process may run freely or must be throttled to control the writeback rate, by checking whether the number of dirty pages exceeds the freerun ceiling, i.e. the midpoint between the background threshold and the hard dirty threshold, in both the global and the memcg domain.

    7. shift = dirty_ratelimit / (2 * step + 1); if (shift < BITS_PER_LONG) step = DIV_ROUND_UP(step >> shift, 8); else step = 0; if (dirty_ratelimit < balanced_dirty_ratelimit) dirty_ratelimit += step; else dirty_ratelimit -= step;

      This is a configuration policy that determines by how much dirty_ratelimit is increased or decreased per update. dirty_ratelimit throttles how fast tasks may dirty pages so that dirtying roughly tracks the device's writeback bandwidth; the step is damped so the rate moves gradually toward balanced_dirty_ratelimit (a simplified sketch follows).
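
      A simplified, illustrative sketch of the damping: for this sketch, step is assumed to start as the full gap between the current and the balanced rate, whereas the kernel derives it from additional feedback; the damping itself follows the quoted fragment:

      ```c
      /* Move dirty_ratelimit toward balanced_dirty_ratelimit in damped steps:
       * the larger dirty_ratelimit is relative to the gap, the smaller the step. */
      #include <stdio.h>

      #define BITS_PER_LONG (8 * (int)sizeof(long))
      #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

      int main(void)
      {
          unsigned long dirty_ratelimit = 8000;            /* pages/s, illustrative */
          unsigned long balanced_dirty_ratelimit = 12000;  /* pages/s, illustrative */

          for (int i = 0; i < 8; i++) {
              unsigned long step = (dirty_ratelimit > balanced_dirty_ratelimit)
                                 ? dirty_ratelimit - balanced_dirty_ratelimit
                                 : balanced_dirty_ratelimit - dirty_ratelimit;
              unsigned long shift = dirty_ratelimit / (2 * step + 1);

              if (shift < BITS_PER_LONG)
                  step = DIV_ROUND_UP(step >> shift, 8);
              else
                  step = 0;

              if (dirty_ratelimit < balanced_dirty_ratelimit)
                  dirty_ratelimit += step;
              else
                  dirty_ratelimit -= step;

              printf("iteration %d: dirty_ratelimit = %lu\n", i, dirty_ratelimit);
          }
          return 0;
      }
      ```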

    8. ratelimit_pages = dirty_thresh / (num_online_cpus() * 32); if (ratelimit_pages < 16) ratelimit_pages = 16;

      This is a configuration policy that dynamically scales how many pages a task may dirty before the dirty-page balancing code is invoked again: the dirty threshold divided by 32 times the number of online CPUs, with a floor of 16 pages so the check is not triggered too often (see the sketch below).
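
      A standalone sketch of the scaling rule with illustrative numbers:

      ```c
      /* ratelimit_pages = dirty_thresh / (ncpus * 32), floored at 16: with more
       * CPUs, each task re-checks the dirty limits after dirtying fewer pages. */
      #include <stdio.h>

      int main(void)
      {
          unsigned long dirty_thresh = 262144;             /* pages, illustrative */

          for (int num_online_cpus = 1; num_online_cpus <= 1024; num_online_cpus *= 8) {
              unsigned long ratelimit_pages = dirty_thresh / (num_online_cpus * 32);
              if (ratelimit_pages < 16)
                  ratelimit_pages = 16;                    /* floor keeps the check cheap */
              printf("%4d CPUs -> ratelimit_pages = %lu\n", num_online_cpus, ratelimit_pages);
          }
          return 0;
      }
      ```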

    9. t = wb_dirty / (1 + bw / roundup_pow_of_two(1 + HZ / 8));

      This implements a configuration policy that caps the maximum pause of a throttled, dirtying task: roughly the number of jiffies needed to write back a small fraction (about one eighth) of the wb's dirty pages at the estimated bandwidth. Bounding the pause keeps dirty pages flowing to disk at a reasonable cadence and prevents a small pool of dirty pages from draining completely, leaving the disk idle, while the task sleeps (a numeric sketch follows).
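
      A standalone sketch of the arithmetic with illustrative numbers (HZ and the bandwidth are made up):

      ```c
      /* With HZ = 250, roundup_pow_of_two(1 + HZ / 8) is 32, so the divisor is
       * roughly 8x the pages written back per jiffy: t comes out as the number
       * of jiffies needed to write back about 1/8 of wb_dirty. */
      #include <stdio.h>

      #define HZ 250                                       /* illustrative tick rate */

      static unsigned long roundup_pow_of_two_ul(unsigned long x)
      {
          unsigned long r = 1;
          while (r < x)
              r <<= 1;
          return r;
      }

      int main(void)
      {
          unsigned long wb_dirty = 4096;                   /* dirty pages on this wb */
          unsigned long bw = 25600;                        /* est. write bandwidth, pages/s */

          unsigned long t = wb_dirty / (1 + bw / roundup_pow_of_two_ul(1 + HZ / 8));
          printf("max pause ~= %lu jiffies\n", t);
          return 0;
      }
      ```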

    10. if (IS_ENABLED(CONFIG_CGROUP_WRITEBACK) && mdtc) {

      This is a configuration policy that controls whether the memcg (control group) writeback domain is updated at all. The config option enables per-cgroup control of dirty-page writeback in the Linux kernel, which allows better resource management and improved performance.

    1. if (si->cluster_info) { if (!scan_swap_map_try_ssd_cluster(si, &offset, &scan_base)) goto scan; } else if (unlikely(!si->cluster_nr--)) {

      Algorithmic policy decision: if the swap device is an SSD (cluster_info is present), use the wear-leveling-friendly cluster algorithm; otherwise, use the HDD algorithm, which minimizes seek time. Certain access patterns may make either algorithm less effective (e.g., wear leveling helps little when the swap is constantly full).

    2. scan_base = offset = si->lowest_bit; last_in_cluster = offset + SWAPFILE_CLUSTER - 1; /* Locate the first empty (unaligned) cluster */ for (; last_in_cluster <= si->highest_bit; offset++) { if (si->swap_map[offset]) last_in_cluster = offset + SWAPFILE_CLUSTER; else if (offset == last_in_cluster) { spin_lock(&si->lock); offset -= SWAPFILE_CLUSTER - 1; si->cluster_next = offset; si->cluster_nr = SWAPFILE_CLUSTER - 1; goto checks; } if (unlikely(--latency_ration < 0)) { cond_resched(); latency_ration = LATENCY_LIMIT; } }

      Here (when using HDDs), the policy places the swapped-out page in the first empty cluster found by a linear scan. This is meant to reduce seek time on spinning drives, since it keeps swap entries near each other.

    3. /* * Even if there's no free clusters available (fragmented), * try to scan a little more quickly with lock held unless we * have scanned too many slots already. */ if (!scanned_many) { unsigned long scan_limit; if (offset < scan_base) scan_limit = scan_base; else scan_limit = si->highest_bit; for (; offset <= scan_limit && --latency_ration > 0; offset++) { if (!si->swap_map[offset]) goto checks; } }

      Here we have a configuration policy: even when no free cluster is available (fragmentation), do another, smaller scan as long as the latency_ration budget has not been exhausted. An alternative policy could yield early on the assumption that no free slot will be found.

    4. /* * select a random position to start with to help wear leveling * SSD */ for_each_possible_cpu(cpu) { per_cpu(*p->cluster_next_cpu, cpu) = get_random_u32_inclusive(1, p->highest_bit); }

      An algorithmic (random) policy is used here: each CPU's cluster_next hint starts at a random offset, spreading swap writes across the SSD to help wear leveling instead of repeatedly writing to the same area.

    5. cluster_list_add_tail(&si->discard_clusters, si->cluster_info, idx);

      A policy decision is made to add a cluster to the discard list on a first-come, first-served basis. However, this approach could be enhanced by prioritizing certain clusters higher on the list based on their 'importance.' This 'importance' could be defined by how closely a cluster is related to other pages in the swap. By doing so, the system can reduce seek time as mentioned on line 817.

    6. if (unlikely(--latency_ration < 0)) { cond_resched(); latency_ration = LATENCY_LIMIT; scanned_many = true; } if (swap_offset_available_and_locked(si, offset)) goto checks; } offset = si->lowest_bit; while (offset < scan_base) { if (unlikely(--latency_ration < 0)) { cond_resched(); latency_ration = LATENCY_LIMIT; scanned_many = true; }

      Here, a policy decision is made to fully replenish latency_ration with LATENCY_LIMIT and then yield back to the scheduler once the budget is exhausted. When the task is scheduled again, it has the full LATENCY_LIMIT available for scanning. Alternative policies could grow or shrink this budget heuristically rather than fully replenishing it each time (a simplified sketch of the budget pattern follows).

      Marked config/value as we're replacing latency_ration with a compile-time-defined limit.
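
      A minimal userspace sketch of the scan-with-latency-budget pattern; the map contents, sizes, and the sched_yield() stand-in for cond_resched() are illustrative:

      ```c
      /* Scan a map for a free slot, but yield the CPU and refill the budget
       * every LATENCY_LIMIT probes so one scan cannot hog the processor. */
      #include <sched.h>
      #include <stdio.h>

      #define LATENCY_LIMIT 256
      #define MAP_SIZE      4096

      static unsigned char swap_map[MAP_SIZE];             /* 0 == free slot */

      static long scan_for_free(unsigned long start)
      {
          long latency_ration = LATENCY_LIMIT;

          for (unsigned long off = start; off < MAP_SIZE; off++) {
              if (--latency_ration < 0) {
                  sched_yield();                           /* stand-in for cond_resched() */
                  latency_ration = LATENCY_LIMIT;          /* full replenish: the policy */
              }
              if (!swap_map[off])
                  return off;
          }
          return -1;
      }

      int main(void)
      {
          for (int i = 0; i < MAP_SIZE - 1; i++)
              swap_map[i] = 1;                             /* only the last slot is free */
          printf("found free slot at %ld\n", scan_for_free(0));
          return 0;
      }
      ```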

    7. while (scan_swap_map_ssd_cluster_conflict(si, offset)) { /* take a break if we already got some slots */ if (n_ret) goto done; if (!scan_swap_map_try_ssd_cluster(si, &offset, &scan_base)) goto scan;

      Here, a policy decision is made to stop scanning if some slots were already found. Other policy decisions could be made to keep scanning or take into account how long the scan took or how many pages were found.

    8. else if (!cluster_list_empty(&si->discard_clusters)) { /* * we don't have free cluster but have some clusters in * discarding, do discard now and reclaim them, then * reread cluster_next_cpu since we dropped si->lock */ swap_do_scheduled_discard(si); *scan_base = this_cpu_read(*si->cluster_next_cpu); *offset = *scan_base; goto new_cluster;

      This algorithmic policy discards + reclaims pages as-needed whenever there is no free cluster. Other policies could be explored that do this preemptively in order to avoid the cost of doing it here.

    9. #ifdef CONFIG_THP_SWAP

      This is a build-time flag that configures how hugepages are handled when swapped: when it is defined, they are swapped out in one piece; when it is not, they are split into smaller units that are swapped individually.

    10. if (swap_flags & SWAP_FLAG_DISCARD_ONCE) p->flags &= ~SWP_PAGE_DISCARD; else if (swap_flags & SWAP_FLAG_DISCARD_PAGES) p->flags &= ~SWP_AREA_DISCARD

      This is a configuration policy decision where a sysadmin can pass flags to sys_swapon() to control how discards are handled. If DISCARD_ONCE is set, the per-page-cluster discard flag (SWP_PAGE_DISCARD, "discard[s] page-clusters after use") is cleared so only the swapon-time discard remains; if DISCARD_PAGES is set, the swapon-time flag (SWP_AREA_DISCARD, "discard[s] swap area at swapon-time") is cleared so only page-cluster discards remain.

    1. randomize_stack_top

      This function uses a configuration policy to enable Address Space Layout Randomization (ASLR) for a specific process when the PF_RANDOMIZE flag is set. It randomizes the position of the process's stack top, making memory addresses unpredictable and so helping to defend against certain attacks (a simplified sketch follows).
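
      A simplified userspace sketch of the idea, assuming a downward-growing stack and an illustrative entropy mask (the constants are not the kernel's):

      ```c
      /* Shift the nominal stack top down by a random, page-aligned offset so the
       * stack lands at a different address on each run. */
      #include <stdio.h>
      #include <stdlib.h>
      #include <time.h>

      #define PAGE_SHIFT     12
      #define PAGE_SIZE      (1UL << PAGE_SHIFT)
      #define PAGE_MASK      (~(PAGE_SIZE - 1))
      #define STACK_RND_MASK 0x3fffffUL                 /* illustrative entropy mask */

      static unsigned long randomize_stack_top(unsigned long stack_top, int randomize)
      {
          unsigned long random_variable = 0;

          if (randomize) {                              /* i.e. PF_RANDOMIZE is set */
              random_variable  = (unsigned long)random();
              random_variable &= STACK_RND_MASK;        /* bound the entropy */
              random_variable <<= PAGE_SHIFT;           /* keep page alignment */
          }
          return (stack_top & PAGE_MASK) - random_variable;
      }

      int main(void)
      {
          srandom((unsigned)time(NULL));
          unsigned long nominal_top = 0x7ffffffff000UL; /* illustrative address */
          printf("randomized stack top: %#lx\n", randomize_stack_top(nominal_top, 1));
          return 0;
      }
      ```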

    1. if (prev_class) { if (can_merge(prev_class, pages_per_zspage, objs_per_zspage)) { pool->size_class[i] = prev_class; continue; } }

      This is an algorithmic policy. A zs_pool maintains zspages of different size_classes. However, some size_classes share exactly the same characteristics, namely pages_per_zspage and objs_per_zspage. Recall my other annotation: free zspages are looked up by size_class via zspage = find_get_zspage(class);. Merging such equivalent classes therefore improves memory utilization.

    2. zspage = find_get_zspage(class); if (likely(zspage)) { obj = obj_malloc(pool, zspage, handle); /* Now move the zspage to another fullness group, if required */ fix_fullness_group(class, zspage); record_obj(handle, obj); class_stat_inc(class, ZS_OBJS_INUSE, 1); goto out; }

      This is an algorithmic policy. Instead of immediately allocating new zspages for each memory request, the algorithm first attempts to find and reuse existing partially filled zspages from a given size class by invoking the find_get_zspage(class) function. It also updates the corresponding fullness groups.

    1. 1

      This is a configuration policy that sets the timeout between retries when vmap_pages_range() fails. It could be a tunable variable.

    2. if (!(flags & VM_ALLOC)) area->addr = kasan_unpoison_vmalloc(area->addr, requested_size, KASAN_VMALLOC_PROT_NORMAL);

      This is an algorithmic policy. It is an optimization that avoids marking accessibility twice: only regions allocated without VM_ALLOC (e.g. ioremap) have not already been marked accessible (unpoisoned), so they need the explicit kasan_unpoison_vmalloc() call here.

    3. 100U

      This is a configuration policy that treats 100 pages as the upper limit for a single bulk-allocator request. However, alloc_pages_bulk_array_mempolicy does not itself enforce such a limit, so I believe this is really an algorithmic policy tied to some optimization.

    4. if (!order) {

      This is an algorithmic policy that uses the bulk allocator only for order-0 (non-huge) pages. Perhaps the bulk allocator could also be applied to higher-order pages to speed up allocation; so far I have not seen a reason why it could not be.

    5. if (likely(count <= VMAP_MAX_ALLOC)) { mem = vb_alloc(size, GFP_KERNEL); if (IS_ERR(mem)) return NULL; addr = (unsigned long)mem; } else { struct vmap_area *va; va = alloc_vmap_area(size, PAGE_SIZE, VMALLOC_START, VMALLOC_END, node, GFP_KERNEL, VMAP_RAM); if (IS_ERR(va)) return NULL; addr = va->va_start; mem = (void *)addr; }

      This is an algorithmic policy that decides when to use the more efficient vb_alloc() path, which avoids the heavier vmap-area machinery. Unlike alloc_vmap_area(), vb_alloc() does not need to traverse the rb-tree of free vmap areas; it simply finds a large enough block in the per-CPU vmap_block_queue.

    6. VMAP_PURGE_THRESHOLD

      The threshold VMAP_PURGE_THRESHOLD is a configuration policy that could be tuned by machine learning. Setting this threshold lower reduces purging activity, while setting it higher reduces fragmentation.

    7. if (!(force_purge

      This is an algorithmic policy that avoids purging blocks that still contain a considerable amount of usable memory unless purging is explicitly requested with force_purge.

    8. resched_threshold = lazy_max_pages() << 1;

      The assignment of resched_threshold (together with lines 1776-1777) is a configuration policy that determines below how many outstanding lazily-freed pages the purge loop should temporarily yield the CPU to higher-priority tasks.

    9. log = fls(num_online_cpus());

      This heuristic scales lazy_max_pages logarithmically, which is a configuration policy. Alternatively, machine learning could determine the optimal scaling function—whether linear, logarithmic, square-root, or another approach.

    10. 32UL * 1024 * 1024

      This is a configuration policy that always returns multiples of 32 MB worth of pages. It could be a configurable variable rather than a fixed magic number (a sketch combining this with the fls() scaling above follows).
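
      A standalone sketch of how the two annotated values plausibly combine in lazy_max_pages(), with a userspace fls() and an assumed 4 KiB page size:

      ```c
      /* lazy_max_pages ~= fls(ncpus) * (32 MiB worth of pages): the lazy-free
       * budget grows with the logarithm of the CPU count, in 32 MiB units. */
      #include <stdio.h>

      #define PAGE_SIZE 4096UL                         /* assumed page size */

      static unsigned int fls_u32(unsigned int x)      /* position of highest set bit, 1-based */
      {
          unsigned int r = 0;
          while (x) {
              x >>= 1;
              r++;
          }
          return r;
      }

      int main(void)
      {
          for (unsigned int cpus = 1; cpus <= 256; cpus *= 4) {
              unsigned long pages = fls_u32(cpus) * (32UL * 1024 * 1024 / PAGE_SIZE);
              printf("%3u CPUs -> lazy_max_pages = %lu pages (%lu MiB)\n",
                     cpus, pages, pages * PAGE_SIZE >> 20);
          }
          return 0;
      }
      ```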

  3. Sep 2024
    1. if (lruvec->file_cost + lruvec->anon_cost > lrusize / 4) { lruvec->file_cost /= 2; lruvec->anon_cost /= 2; }

      It is a configuration policy. The policy here adjusts the accumulated reclaim cost of pages. The cost represents the overhead the kernel incurs to operate on a page if it is swapped out, and the kernel applies a decay policy to it: whenever the accumulated cost exceeds lrusize/4, it is halved. Without this policy, after long-term running the accumulated cost would only grow, and the kernel might mistakenly favor pages that were frequently visited long ago but are inactive now, causing performance degradation. The value lrusize/4 is a trade-off between performance and sensitivity: if it is too small, the kernel readjusts page priorities too frequently, hurting process performance; if it is too large, the kernel can be misled by historical data and wrongly swap out currently popular pages, again degrading performance (a numeric sketch of the decay follows).
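
      A standalone numeric sketch of the decay, with made-up cost increments:

      ```c
      /* When the accumulated file+anon cost exceeds lrusize/4, both are halved,
       * so old reclaim-cost observations decay instead of accumulating forever. */
      #include <stdio.h>

      int main(void)
      {
          unsigned long lrusize = 1000000;              /* pages, illustrative */
          unsigned long file_cost = 0, anon_cost = 0;

          for (int event = 0; event < 8; event++) {
              file_cost += 90000;                       /* pretend reclaim charged file pages */
              anon_cost += 40000;                       /* ...and some anon pages */

              if (file_cost + anon_cost > lrusize / 4) {
                  file_cost /= 2;
                  anon_cost /= 2;
              }
              printf("event %d: file_cost=%lu anon_cost=%lu\n", event, file_cost, anon_cost);
          }
          return 0;
      }
      ```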

    2. if (megs < 16) page_cluster = 2; else page_cluster = 3;

      It is a configuration policy. The policy here determines the size of the page cluster; the values "2" and "3" are determined externally to the kernel. The page cluster is the actual unit of work when swapping: on a small-memory machine the kernel swaps 4 (2^2) pages at a time, otherwise 8 (2^3) pages at a time. The rationale is to avoid memory pressure on small-memory systems and to improve throughput on large-memory systems (a small sketch follows).
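
      A tiny standalone sketch of the rule, with illustrative memory sizes:

      ```c
      /* Machines with less than 16 MiB of RAM swap in clusters of 2^2 = 4 pages;
       * larger machines use 2^3 = 8 pages per swap I/O unit. */
      #include <stdio.h>

      int main(void)
      {
          unsigned long megs_values[] = { 8, 15, 16, 1024 };   /* RAM in MiB, illustrative */

          for (int i = 0; i < 4; i++) {
              unsigned long megs = megs_values[i];
              int page_cluster = (megs < 16) ? 2 : 3;
              printf("%5lu MiB RAM -> page_cluster=%d (%d pages per swap unit)\n",
                     megs, page_cluster, 1 << page_cluster);
          }
          return 0;
      }
      ```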

    1. mod_memcg_page_state(area->pages[i], MEMCG_VMALLOC, 1);

      It is a configuration policy. It charges the physical page to the memory control group; the policy is that one physical page corresponds to one unit of MEMCG_VMALLOC accounting. If huge pages were used, the increment here might not be 1.

    2. schedule_timeout_uninterruptible(1);

      It is a configuration policy. If the kernel cannot allocate a large enough, suitably aligned virtual area while nofail forbids failure, it puts the process into an uninterruptible sleep for one timer tick before retrying; during that period the process leaves the CPU's run queue. The "1" here is a configuration parameter set externally to the kernel: a compromise between retrying too aggressively and sleeping too long for latency-sensitive processes.

    3. if (array_size > PAGE_SIZE) { area->pages = __vmalloc_node(array_size, 1, nested_gfp, node, area->caller); } else { area->pages = kmalloc_node(array_size, nested_gfp, node); }

      It is an algorithmic policy and also a configuration policy. The algorithmic decision concerns where to place the pages[] metadata array so as to preserve locality in virtual address space: if the array itself is larger than one page, the kernel allocates it recursively with __vmalloc_node() as a run of virtually contiguous pages; otherwise it uses kmalloc_node() so the array fits within a single physically contiguous allocation. To obtain the contiguous virtual pages, __vmalloc_node() is invoked with align = 1, which is the configuration-policy part (a userspace analogy follows).
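
      A userspace analogy of the same size-based split, with malloc() standing in for kmalloc_node() and an anonymous mmap() standing in for __vmalloc_node() (purely illustrative, not the kernel's code):

      ```c
      /* Small metadata arrays come from the ordinary heap; arrays larger than a
       * page come from page-granular mappings, mirroring the kmalloc/vmalloc split. */
      #include <stdio.h>
      #include <stdlib.h>
      #include <sys/mman.h>
      #include <unistd.h>

      static void *alloc_page_array(size_t nr_pages, size_t *array_size)
      {
          long page_size = sysconf(_SC_PAGESIZE);
          *array_size = nr_pages * sizeof(void *);

          if (*array_size > (size_t)page_size)          /* large: page-granular mapping */
              return mmap(NULL, *array_size, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
          return malloc(*array_size);                   /* small: ordinary heap */
      }

      int main(void)
      {
          size_t sz;

          void *small = alloc_page_array(8, &sz);
          printf("8-entry array (%zu bytes) via malloc at %p\n", sz, small);
          free(small);

          void *large = alloc_page_array(4096, &sz);
          printf("4096-entry array (%zu bytes) via mmap at %p\n", sz, large);
          munmap(large, sz);
          return 0;
      }
      ```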

    4. if (!counters) return; if (v->flags & VM_UNINITIALIZED) return;

      It is an algorithmic policy. show_numa_info() prints how the virtual memory area v (vm_struct) distributes its pages across NUMA nodes. The two if conditions filter out invalid areas: the first means the area has no associated physical pages yet, and the second means the area has not been fully initialized.

    5. if (page) copied = copy_page_to_iter_nofault(page, offset, length, iter); else copied = zero_iter(iter, length);

      It is an algorithmic policy. The goal is to avoid page-fault handling overhead by using copy_page_to_iter_nofault() together with an if/else branch. Because copy_page_to_iter_nofault() must not fault, the kernel first checks that the physical page is still present. If the page is not valid, the kernel fills the iterator with zeroes instead of returning an error; I think this makes the function more convenient to call, since the caller does not need to check a return value or take further action to handle it.

    6. if (base + last_end < vmalloc_start + last_end) goto overflow; /* * Fitting base has not been found. */ if (va == NULL) goto overflow;

      It is an algorithmic policy. The two if conditions check the feasibility of the allocation. The first means the allocation's address would exceed the allowable range, causing overflow; in this case, since base addresses are tried from high to low, further reducing the base address cannot help (note that the base address depends on the va). The second means no valid va with an appropriate base address was found. The policy in both cases is to jump to the overflow branch and retry with a re-allocated va list.

    7. if (base + start < va->va_start) { va = node_to_va(rb_prev(&va->rb_node)); base = pvm_determine_end_from_reverse(&va, align) - end; term_area = area; continue; }

      It is an algorithmic policy. It checks whether the start address of the current va is valid. The policy here is to switch to a different va, unlike the solution for the "end address check" (explained in my other annotation). The reason is that base addresses are searched from high to low: if we kept reducing the base address, this if condition would always fail, so moving to a va with a lower start address is the only way forward.

    8. if (base + end > va->va_end) { base = pvm_determine_end_from_reverse(&va, align) - end; term_area = area; continue; }

      It is an algorithmic policy. The outer loop runs until it finds a valid virtual memory region satisfying the requirements. The if condition here guarantees that the space in va is large enough; if it is not, the scan continues from high addresses toward low addresses while honoring the alignment requirement, specifically by tuning the base address downward, and then retries the checks.

    9. va = kmem_cache_zalloc(vmap_area_cachep, GFP_NOWAIT); if (WARN_ON_ONCE(!va)) continue;

      It is an algorithmic policy. The for loop migrates virtual memory management from vm_struct (a linked list) to vmap_area (a tree). Inside the loop, if a new vmap_area cannot be allocated from vmap_area_cachep (va == NULL), the code simply prints a warning and skips the current entry instead of retrying or otherwise guaranteeing integrity.

    10. if (!spin_trylock(&vmap_area_lock)) return false;

      It is an algorithmic policy. vmalloc_dump_obj() prints information about the specified virtual memory object. Because of concurrency, the lock must be taken before touching the critical state. The policy here is spin_trylock(): it tries only once, and if that fails the function simply returns false. I think the intent is to avoid long waits and keep this diagnostic path cheap.

    1. This revealed a more fundamental issue: programming to increase contraceptive uptake in the postpartum period likely produces little meaningful reduction in pregnancy risk
    1. The law in many countries allows users certain rights over data whose copyright they do not own (including text, images, and other media), often under headings such as fair use or public interest. Depending on jurisdiction, these may cover issues such as whistleblowing, production of evidence in court, quoting or other small-scale usage, backups of owned media, and making a copy of owned material for personal use on other owned devices or systems. The steps implicit in trusted computing have the practical effect of preventing users exercising these legal rights.
    1. during these 200 years the main effects of print on Europe was a wave of Wars of religion and witch hunts and things like that because the big best sellers of early modern Europe were not Copernicus and Galileo galile almost nobody read them the big best sellers were religious tracks and were uh witch hunting manuals

      for - Gutenberg printing press - initial use - misinformation - witchhunting manuals were the best sellers - Yuval Noah Harari

    1. In cases where I've been concerned about the migration of data (e.g. copying my entire home directory from one system to another), I've used fingerprint to generate a transcript on the source machine, and then run it on the destination machine, to reassure me that the data was copied correctly and completely.
    1. When should I use Async? ¶ You should use Async when you desire explicit concurrency in your program. That means you want to run multiple tasks at the same time, and you want to be able to wait for the results of those tasks.
  4. Aug 2024
    1. if (hugetlb_cgroup_disabled())

      This is probably not an interesting policy decision for ldos; it is a feature flag for the running OS. But if cgroups were decided by policy, then this flag would be controlled by the cgroup decision.

  5. Jul 2024
    1. The purpose of distinguishing between safe and unsafe methods is to allow automated retrieval processes (spiders) and cache performance optimization (pre-fetching) to work without fear of causing harm.
    1. It is now possible (but not easy) for anyone who is determined enough to create a xanadoc, and send it to others, who may open and use it.  (Note that the World Wide Web was available for several years before the Mosaic editor made it easy for the public.)

      fair enough...

    1. However, with verifiable credentials in a Solid Pod, the university issues some information stating that a student completed a course and cryptography signs that information. This is a verifiable credential. They then pass that credential to the student who stores it in their Solid Personal Online Datastore (Pod). When the student wants to apply for a job, all they need to do is grant access to the credential so the company can read it. The company can confirm that the credential isn’t faked because its cryptographically signed by the university.
  6. Jun 2024
    1. Despite – or perhaps because of – all this activity, Samuel only published one sole-authored book in his lifetime, Theatres of Memory (1994), an account of the popular historical imagination in late 20th-century Britain told via case studies, from Laura Ashley fabrics to the touristification of Ironbridge. Since his death from cancer in 1996, however, Samuel has been prolific. A second volume of Theatres of Memory, titled Island Stories: Unravelling Britain, came out in 1998, followed in 2006 by The Lost World of British Communism, a volume of essays combining research and recollections.

      Theatres of Memory (1994) sounds like it's taking lots of examples from a zettelkasten and tying them together.

      It's also interesting to note that he published several books posthumously. Was this accomplished in part due to his zettelkasten notes, the way it was for others like Ludwig Wittgenstein?

    1. Technology, Education and Copyright Harmonization (TEACH) Act. Signed into law by President Bush on November 2, the TEACH Act loosens the restraints created by the DMCA insofar as education is concerned

      TEACH Act - loosens DMCA in favor of fair use

    2. More importantly, the idea of Fair Use was effectively removed from Web-based education because of the DMCA

      Fair Use revoked b/c of DMCA

    1. It is not necessary to prevail on each of the four factors for a successful fair use claim.

      You don't have to prove fair use in all 4 factors

    2. Fair use is a flexible standard and all four statutory factors are considered together.

      Fair use flexibility

    3. The US Constitution clearly states that the purpose of the intellectual property system is to “promote the progress of science and the useful arts

      Purpose of Intellectual property system

    4. ir use is a right explicitly recognized by the Copyright Act.1 The Supreme Court has recognized this right as a “First Amendment safeguard” because copyright law might otherwise constrict freedom of speech

      Fair Use is a right - I really don't understand this but Ok

    1. Using the Four Factors

      Four Factors Test for Fair Use

      Read about each factor (character of the use, nature of the work, amount used, effect upon the market)

      Answer each factor's question about your use See how the balance tips with each answer

      Make a judgment about the final balance: overall, does the balance tip in favor of fair use or in favor of getting permission?

    1. are a small subset of the uses of online resources educators may wish to make. It only covers in class performances and displays, not, for example, supplemental online reading, viewing, or listening materials. For those activities, as well as many others, we'll need to continue to rely on fair use. Remember, however, when relying on fair use, the fair use test is sensitive to harm to markets. This means that in general, where there is an established market for permissions, there will often be a narrower scope for fair use, and our reliance on fair use should be limited.
    1. Docker-in-Docker via dind has historically been widely used in CI environments. It means the "inner" containers have a layer of isolation from the host. A single CI runner container supports every pipeline container without polluting the host's Docker daemon.
    2. While it often works, this is fraught with side effects and not the intended use case for dind. It was added to ease the development of Docker itself, not provide end user support for nested Docker installations.
  7. May 2024
    1. Please note that '+' characters are frequently used as part of an email address to indicate a subaddress, as for example in <bill+ietf@example.org>.

      Nice of them to point that this is a common scenario, not just a hypothetical one.

    1. It is not helpful to use the term "terrorism" in a war when the White House only ever applies it to one side. Better to remind both Hamas and the Israeli government that humanitarian law makes it a war crime to target or indiscriminately fire on civilians.

      Kenneth Roth October 7, 2023 tweet

    1. Joe Van Cleave has used an index card with his typewriter to hold up the back of his typewriter paper on machines (the Ten Forty, for example) which lack a metal support on the back of the paper table.

  8. Apr 2024
    1. for - deep geothermal - Quaise Energy - Paul Woskov - adjacency - deep geothermal - gyrotron - microwave energy - drilling - use oil & gas industry drilling expertise - rehabilitate old mines

      summary - see adjacency statement below

      adjacency - between - deep geothermal - gyrotron - microwave energy - drilling - use oil & gas industry drilling expertise - rehabilitate old mines - adjacency statement - gyrotrons pulse high energy microwave energy in nuclear fusion experiments - Woskov thought of applying it to vaporizing rocks - Quaise was incorporated to explore the possibility of using gyrotrons to drill up to 20 miles down to tap into the earth's heat energy to heat water and drive steam turbines in existing coal-fired and gas power plants - oil and gas industry drilling expertise can be repurposed for this job - as well as all the abandoned resource wells around the globe - Such heat can provide a stable 24/7 base load energy for most of humanity's energy needs.

      implications for energy transition - This is a viable option for replacing the dirty fossil fuel system - It has the scale and engineering timelines to be feasible - It is a supply side change but can affect our demand side strategy - The strategy that may become the most palatable is one of a "temporary energy diet"

    1. In summary, public and private rangeland resources provide a wide variety of EGS. Additionally, spiritual values are vital to the well-being of ranching operations, surrounding communities, and the nation as a whole. Society is placing multiple demands on the nation’s natural resources, and it is extremely important that NRCS be able to provide resource data and technical assistance at local and national levels. (4) Rangelands are in constant jeopardy, either from misuse or conversion to other uses. Holechek et al. (2004) and Holechek (2013) states that in the next 100 years, up to 40 percent of U.S. rangelands could be converted and lost to other uses. Land-use shifts from grazing use to urbanization will be much greater in areas of more rapid population increases and associated appreciating land values. Projections supporting forage demand suggest that changes in land use will decrease the amount of land available for grazing to a greater extent in the Pacific Coast and Rocky Mountains, compared to the North or South Assessment Regions (Mitchell 2000). (5) As society attempts to satisfy multiple demands with limited resources, many ranching and farming operations seek to expand operations for multiple goods and services beyond traditional cattle production. Some diversified enterprises may include the following: (i) Management to enhance wildlife abundance and diversity for fishing, hunting and non-hunting activities (ii) Maintaining habitat for rare plants (iii) Accommodating nature enthusiasts, bird watchers, and amateur botanists. (6) Planning, evaluation, and communication are necessary steps (consult conservation planning steps) prior to initiating any new rangeland EGS-based enterprises.
    1. if a mail system automatically unsubscribes recipient mailboxes that have been closed or abandoned, there can be no interaction with a user who is not present

      I had not thought of that use case...

    1. In Germany, gasoline consumption rose by 416,000 tonnes in 2023, to 17.3 million tonnes. The ADAC attributes this to the sale of more gasoline-powered cars. Diesel consumption declined slightly due to less freight traffic. https://taz.de/Kraftstoffverbrauch-bundesweit/!5998899/

    1. For larger collections of books it may be thought preferable to use a library classification, such as Mr. Dewey's Decimal Classification, but I doubt very much if the gain will be in proportion to the additional labour involved.

      Some interesting shade here, but he's probably right with respect to the additional work involved in a personal collection which isn't shared at scale.

      The real work is the indexing of the material within the books, the assigned numbers are just a means of finding them.

  9. Mar 2024
    1. When you need to publish multiple packages and want to avoid your contributors having to open PRs on many separate repositories whenever they want to make a change.
    2. when projects want to keep strict boundaries within their code and avoid becoming an entangled monolith. This is for example the case for Yarn itself, or many enterprise codebases.
  10. Feb 2024
    1. The lonesome cat isn’t useless.  UUOCs are typically characterized by having exactly one filename argument; this one has none.  It connects the input to the function (which is the input of the if statement) to the output of the if statement (which is the input to the base64 --decode statement)
    1. The very degree of wornness of certain cards that you once flipped to daily but now perhaps do not—since that author is drunk and forgotten or that magazine editor has been fired and now makes high-end apple chutneys in Binghamton—constitutes significant information about what parts of the Rolodex were of importance to you over the years.

      The wear of cards can be an important part of your history with the information you handle.


      Luhmann’s slips show some of this sort of wear as well, though his show it to extreme as he used thinner paper than the standard index card so some of his slips have incredibly worn/ripped/torn tops more than any grime. Many of my own books show that grime layer on the fore-edge in sections which I’ve read and re-read.

      One of my favorite examples of this sort of wear through use occurs in early manuscripts (usually only religious ones) where readers literally kissed off portions of illuminations when venerating the images in their books. Later illuminators included osculation targets to help prevent these problems. (Cross reference: https://www.researchgate.net/publication/370119878_Touching_Parchment_How_Medieval_Users_Rubbed_Handled_and_Kissed_Their_Manuscripts_Volume_1_Officials_and_Their_Books)

      (syndication link: https://boffosocko.com/2024/02/04/55821315/#comment-430267)

    1. A study by the Rhodium Group paints a picture of the future that starkly contradicts many decarbonization forecasts, except for electricity generation and cars. (It lies well above the IPCC's SSP2-4.5 scenario.) By 2100, natural gas consumption is projected to be 126% of today's level. In 2050, aircraft will burn 77% more fossil fuel than today, while shipping and industry will use roughly today's amounts. https://www.theguardian.com/environment/2024/feb/09/biggest-fossil-fuel-emissions-shipping-plane-manufacturing

      Rhodium Climate Outlook: https://climateoutlook.rhg.com/

      Rhodium-Artikel zur Prognose: https://rhg.com/research/global-fossil-fuel-demand/

  11. Jan 2024
    1. Instance methods Instances of Models are documents. Documents have many of their own built-in instance methods. We may also define our own custom document instance methods. // define a schema const animalSchema = new Schema({ name: String, type: String }, { // Assign a function to the "methods" object of our animalSchema through schema options. // By following this approach, there is no need to create a separate TS type to define the type of the instance functions. methods: { findSimilarTypes(cb) { return mongoose.model('Animal').find({ type: this.type }, cb); } } }); // Or, assign a function to the "methods" object of our animalSchema animalSchema.methods.findSimilarTypes = function(cb) { return mongoose.model('Animal').find({ type: this.type }, cb); }; Now all of our animal instances have a findSimilarTypes method available to them. const Animal = mongoose.model('Animal', animalSchema); const dog = new Animal({ type: 'dog' }); dog.findSimilarTypes((err, dogs) => { console.log(dogs); // woof }); Overwriting a default mongoose document method may lead to unpredictable results. See this for more details. The example above uses the Schema.methods object directly to save an instance method. You can also use the Schema.method() helper as described here. Do not declare methods using ES6 arrow functions (=>). Arrow functions explicitly prevent binding this, so your method will not have access to the document and the above examples will not work.

      Certainly! Let's break down the provided code snippets:

      1. What is it and why is it used?

      In Mongoose, a schema is a blueprint for defining the structure of documents within a collection. When you define a schema, you can also attach methods to it. These methods become instance methods, meaning they are available on the individual documents (instances) created from that schema.

      Instance methods are useful for encapsulating functionality related to a specific document or model instance. They allow you to define custom behavior that can be executed on a specific document. In the given example, the findSimilarTypes method is added to instances of the Animal model, making it easy to find other animals of the same type.

      2. Syntax:

      Using methods object directly in the schema options:

      ```javascript
      const animalSchema = new Schema(
        { name: String, type: String },
        {
          methods: {
            findSimilarTypes(cb) {
              return mongoose.model('Animal').find({ type: this.type }, cb);
            }
          }
        }
      );
      ```

      Using methods object directly in the schema:

      ```javascript
      animalSchema.methods.findSimilarTypes = function(cb) {
        return mongoose.model('Animal').find({ type: this.type }, cb);
      };
      ```

      Using Schema.method() helper:

      ```javascript
      animalSchema.method('findSimilarTypes', function(cb) {
        return mongoose.model('Animal').find({ type: this.type }, cb);
      });
      ```

      3. Explanation in Simple Words with Examples:

      Why it's Used:

      Imagine you have a collection of animals in your database, and you want to find other animals of the same type. Instead of writing the same logic repeatedly, you can define a method that can be called on each animal instance to find similar types. This helps in keeping your code DRY (Don't Repeat Yourself) and makes it easier to maintain.

      Example:

      ```javascript
      const mongoose = require('mongoose');
      const { Schema } = mongoose;

      // Define a schema with a custom instance method
      const animalSchema = new Schema({ name: String, type: String });

      // Add a custom instance method to find similar types
      animalSchema.methods.findSimilarTypes = function(cb) {
        return mongoose.model('Animal').find({ type: this.type }, cb);
      };

      // Create the Animal model using the schema
      const Animal = mongoose.model('Animal', animalSchema);

      // Create an instance of Animal
      const dog = new Animal({ type: 'dog', name: 'Buddy' });

      // Use the custom method to find similar types
      dog.findSimilarTypes((err, similarAnimals) => {
        console.log(similarAnimals);
      });
      ```

      In this example, findSimilarTypes is a custom instance method added to the Animal schema. When you create an instance of the Animal model (e.g., a dog), you can then call findSimilarTypes on that instance to find other animals with the same type. The method uses the this.type property, which refers to the type of the current animal instance. This allows you to easily reuse the logic for finding similar types across different instances of the Animal model.

      Certainly! Let's go through each part and explain it in simple terms:

      ### 1. `this` in Mongoose:

      - **What is `this`?** In JavaScript, `this` refers to the current context or object. In Mongoose, particularly within methods and middleware functions, `this` represents the instance (document) the function is currently operating on.
      - **Why is it used?** `this` is used to access and modify the properties of the current document. For example, in a Mongoose method, `this` allows you to refer to the fields of the specific document the method is called on.

      ### 2. Example:

      Let's use the `userSchema.pre("save", ...)`, which is a Mongoose middleware, as an example:

      ```javascript
      userSchema.pre("save", async function (next) {
        if (!this.isModified("password")) {
          next();
        } else {
          this.password = await bcrypt.hash(this.password, 10);
          next();
        }
      });
      ```

      - **Explanation in Simple Words:**
        - Imagine you have a system where users can sign up and set their password.
        - Before saving a new user to the database, you want to ensure that the password is securely encrypted (hashed) using a library like `bcrypt`.
        - The `userSchema.pre("save", ...)` is a special function that runs automatically before saving a user to the database.
        - In this function:
          - `this.isModified("password")`: Checks if the password field of the current user has been changed.
          - If the password is not modified, it means the user is not updating their password, so it just moves on to the next operation (saving the user).
          - If the password is modified, it means a new password is set or the existing one is changed. In this case, it uses `bcrypt.hash` to encrypt (hash) the password before saving it to the database.
        - The use of `this` here is crucial because it allows you to refer to the specific user document that's being saved. It ensures that the correct password is hashed for the current user being processed.

      In summary, `this` in Mongoose is a way to refer to the current document or instance, and it's commonly used to access and modify the properties of that document, especially in middleware functions like the one demonstrated here for password encryption before saving to the database.


    1. It's also common to want to compute the transitive closure of these relations, for instance, in listing all the issues that are, transitively, duped to the current one to hunt for information about how to reproduce them.
    2. Purpose of the relations should be to allow advanced analysis, visualization and planning by third-party tools or extensions/plugins, while keeping Gitlab's issues simple and easy to use.
  12. Dec 2023
  13. Oct 2023
    1. When digitized, each resulting ‘Digital Specimen’ must be persistently and unambiguously identified. Subsequent events or transactions associated with the Digital Specimen, such as annotation and/or modification by a scientist must be recorded, stored and also unambiguously identified.

      Workflows

    2. Persistent identifiers (PID) to identify digital representations of physical specimens in natural science collections (i.e., digital specimens) unambiguously and uniquely on the Internet are one of the mechanisms for digitally transforming collections-based science.

      Use case

    3. Appropriate identifiers Requirement: PIDs appropriate to the digital object type being persistently identified.

      Appropriate

  14. Sep 2023
    1. Ultimately the knowledge graph will permit a much clearer understanding of global research networks, research impact, and the ways in which knowledge is created in a highly interconnected world.

      Use Case

    2. PIDs infrastructure promises much more accurate and timely reporting for key metrics including the number of publications produced at an institution in a given year, the total number of grants, and the amount of grant funding received.

      Use Case

    3. They can also much more easily see whether researchers have met mandated obligations for open access publishing and open data sharing.

      Use Case

    4. The global knowledge graph created by the interlinking of PIDs can help funders to much more easily identify the publications, patents, collaborations, and open knowledge resources that are generated through their various granting programs.

      Use Case

    1. I think there are real-world use cases! Would you consider converting a history of transactions into a history of account balances a valid use-case? That can be done easily with a scan. For example, if you have transactions = [100, -200, 200] then you can find the history of account balances with transactions.scan_left(0, &:+) # => [0, 100, -100, 100].
    1. PIDs for research data; PIDs for instruments; PIDs for academic events; PIDs for cultural objects and their contexts; PIDs for organizations and projects; PIDs for researchers and contributors; PIDs for physical objects; PIDs for open-access publishing services and current research information systems (CRIS); PIDs for software; PIDs for text publications

      PID Use Case Elements, entities

    1. To reuse and/or reproduce research it is desirable that research output be available with sufficient context and details for both humans and machines to be able to interpret the data as described in the FAIR principles

      Reusability and reproducibility of research output

    2. Registration of research output is necessary to report to funders like NWO, ZonMW, SIA, etc. for monitoring and evaluation of research (e.g. according to SEP or BKO protocols). Persistent identifiers can be applied to ease the administrative burden. This results in better reporting, better information management and in the end better research information.

      Registering and reporting research

    3. 1. Registration and reporting research; 2. Reusability and reproducibility of research; 3. Evaluation and recognition of research; 4. Grant application; 5. Researcher profiling; 6. Journal rankings

      PID Use Cases

    1. Deduplication of researchers; Linkage with awards; Authoritative attribution of affiliation and works: ORCID iD (Recommended). Identification of datasets, software and other types of research outputs: DataCite DOI (Recommended). Identification of organisations: GRID/ROR (Recommended). Identification of organisations in NZRIS: NZBN (Required for data providers).

      PID Use Cases

    1. Key features ● KISTI's mission is to curate, collect, consolidate, and provide scientific information to Korean researchers and institutions. It includes but is not limited to: ■ Curating Korean R&D outputs. Curate them to a higher state of identification for better curation, tracking research impact, analysing research outcomes. ■ DOI RA management. Issuing DOIs to Korean research outputs, intellectual properties, research data. ■ Support Korean societies to stimulate better visibility of their journal articles around the world. ■ Collaborate for better curation (identification and interlinking) with domestic and global scientific information management institutions, publishers and identifier managing agencies

      PID Use Cases

    1. Name of infrastructure | Key purpose | List of integrated PIDs
       e-infra | This large infrastructure will build the National Repository Platform in the upcoming years. That should greatly facilitate adoption of PIDs. | TBD
       National CRIS - IS VaVaI (R&D Information System) | National research information system. We plan on working with Research, Development and Innovation Council (in charge of IS VaVaI) on integrating global PIDs into their submission processes as required. Nowadays it uses mostly local identifiers. | TBD
       Institutional CRIS systems | Various institutional CRIS systems at Czech RPOs. OBD (Personal Bibliographic Database) application is an outstanding case of an institutional CRIS system in the Czech Republic developed locally by a Czech company DERS. An ORCID integration for OBD is currently in development. | TBD, OBD ORCID in process
       Institutional or subject repositories | There are several repositories in the Czech republic collecting different objects, some are already using PIDs but there is still enough room to improve and really integrate those PIDs, not only allow their evidence. | Handle, DOI, maybe other
       Major research funders | Grant application processes | TBD
       Local publishers | Content submission processes | TBD

      PID Use Cases

    2. Function PID type Recommended or required?

      PID Use Cases

    1. 1. Registration and reporting research; 2. Reusability and reproducibility of research; 3. Evaluation and recognition of research; 4. Grant application; 5. Researcher profiling; 6. Journal rankings

      PID Use Cases in the Netherlands

    1. PIDs comparison table. Case study: Finland. Researchers, persons: ORCID; ISNI. Organisations: VAT-number (not resolvable yet); RoR; ISNI.

      PID usage by country

    1. Recent work has revealed several new and significant aspects of the dynamics of theory change. First, statistical information, information about the probabilistic contingencies between events, plays a particularly important role in theory-formation both in science and in childhood. In the last fifteen years we’ve discovered the power of early statistical learning.

      The data of the past is congruent with the current psychological trends that face the education system of today. Developmentalists have charted how children construct and revise intuitive theories. In turn, a variety of theories have developed because of the greater use of statistical information that supports probabilistic contingencies that help to better inform us of causal models and their distinctive cognitive functions. These studies investigate the physical, psychological, and social domains. In the case of intuitive psychology, or "theory of mind," developmentalism has traced a progression from an early understanding of emotion and action to an understanding of intentions and simple aspects of perception, to an understanding of knowledge vs. ignorance, and finally to a representational and then an interpretive theory of mind.

      The mechanisms by which life evolved—from chemical beginnings to cognizing human beings—are central to understanding the psychological basis of learning. We are the product of an evolutionary process and it is the mechanisms inherent in this process that offer the most probable explanations to how we think and learn.

      Bada, & Olusegun, S. (2015). Constructivism Learning Theory : A Paradigm for Teaching and Learning.

  15. learn-us-east-1-prod-fleet02-xythos.content.blackboardcdn.com
    1. ly a few decades before had almost destroyed them. The beauty of these plots is that they present the harshness of the regional environment in such a way as to make the human struggle against it appear even more positive and heroic than the continuous ascent portrayed in earlier frontier narratives.

      include this in comparison of different narrative fronts

  16. Aug 2023
    1. async vs. sync depends exactly on what you are doing in what context. If this is in a network service, you need async. For a command line utility, sync is the appropriate paradigm in most simple cases, but just knee-jerk saying "async is better" is not correct. My snippet is based on the OP snippet for context.
    1. highlights the dire financial circumstances of the poorest individuals, who resort to high-interest loans as a survival strategy. This phenomenon reflects the interplay between human decision-making and development policy. The decision to take such loans, driven by immediate needs, illustrates how cognitive biases and limited options impact choices. From a policy perspective, addressing this issue requires understanding these behavioral nuances and crafting interventions that provide sustainable alternatives, fostering financial inclusion and breaking the cycle of high-interest debt.

    1. Recently we recommended that OCLC declare OCLC Control Numbers (OCN) as dedicated to the public domain. We wanted to make it clear to the community of users that they could share and use the number for any purpose and without any restrictions. Making that declaration would be consistent with our application of an open license for our own releases of data for re-use and would end the needless elimination of the number from bibliographic datasets that are at the foundation of the library and community interactions. I’m pleased to say that this recommendation got unanimous support and my colleague Richard Wallis spoke about this declaration during his linked data session during the recent IFLA conference. The declaration now appears on the WCRR web page and from the page describing OCNs and their use.

      OCLC Control Numbers are in the public domain

      An updated link for the "page describing OCNs and their use" says:

      The OCLC Control Number is a unique, sequentially assigned number associated with a record in WorldCat. The number is included in a WorldCat record when the record is created. The OCLC Control Number enables successful implementation and use of many OCLC products and services, including WorldCat Discovery and WorldCat Navigator. OCLC encourages the use of the OCLC Control Number in any appropriate library application, where it can be treated as if it is in the public domain.

  17. Jul 2023
  18. Jun 2023
    1. Have you ever: Been disappointed, surprised or hurt by a library etc. that had a bug that could have been fixed with inheritance and few lines of code, but due to private / final methods and classes were forced to wait for an official patch that might never come? I have. Wanted to use a library for a slightly different use case than was imagined by the authors but were unable to do so because of private / final methods and classes? I have.
    2. It often eliminates the only practical solution to unforseen problems or use cases.
  19. May 2023
    1. human values and ethics, rather than solely pursuing technological progress.

      I ask whether technology is classic Pandora's Box--once the attitude is out, you cannot re-box it. Or at least we haven't figured out a way.

      Once this margin has been populated with annotations I want to redo the prompt to include them as an alternative point of view in the dialogue.

    1. Bitcoin's annual energy demand is currently about twice Austria's electricity consumption. The article in Der Standard (uncritically) repeats a whole series of the manipulative arguments that crypto bros use to hide or justify this fact. https://www.derstandard.at/story/2000146147512/der-klimasuender-bitcoin-in-zahlen-und-wie-er-dieses-image

      Cambridge Bitcoin Electricity Consumption Index (CBECI): https://ccaf.io/cbnsi/cbeci

  20. Apr 2023
  21. Mar 2023
    1. Unique Passwords and U2F are not perfect, but they are good. Unique Passwords reduce the impact of phishing, but can’t eliminate it. U2F doesn’t prevent malware, but does prevent phishing.
    1. Figure 4. Daily average time spent with friends. Graphed by Zach Rausch from data in Kannan & Veazie (2023), analyzing the American Time Use Study.2
  22. Feb 2023
    1. Overall, the web page is easy to use. I can easily schedule an appointment, and navigating the tabs is simple. I went ahead and tried the web page on my mobile device as well, and it was just as easy. I also went through to try and book an appointment and that was fairly simple also, although some people may not know which type of appointment is best for them. A brief summary of the different types of treatments would be beneficial.

  23. Jan 2023
    1. There’s a caveat that we’re aware of—while Hydrogen and App developers only require one runtime (Node), Theme developers need two now: Ruby and Node.

      Well, you could write standards-compliant JS... Then people could run it on the runtime everyone already has installed, instead of needing to download Node.

    1. not the technology itself that will bring about the learning or solve pedagogic problems in the language classroom, but rather the affordances of those technologies and their use and integration in a well-formulated curriculum
    2. teachers’ digital literacies and their preparedness and motivation to introduce technology in their teaching will largely impact on the extent to which technology-mediated TBLT will be viable as an innovation (Hubbard 2008)
    3. The addition of new technologies to people’s lives is never neutral, as it affects them, their language, and their personal knowledge and relations (Crystal 2008; Jenkins et al. 2009; Walther 2012)
    4. “digital natives” (Prensky 2001)

      To remember for later use: digital native, aka Gen Z

    5. Warschauer has long warned, computer and information technology is no magic bullet and can be used to widen as much as to narrow social and educational gaps (Warschauer 2012)
    6. particularly new Internet-connected devices and digital technologies have become embedded in the life and learning processes of many new generations of students (Baron 2004; Ito et al. 2009)

      Saving this fact because it can be used as a reference in our later work: it is closely related to our field.

  24. Dec 2022
    1. When I started working on the history of linguistics — which had been totally forgotten; nobody knew about it — I discovered all sorts of things. One of the things I came across was Wilhelm von Humboldt’s very interesting work. One part of it that has since become famous is his statement that language “makes infinite use of finite means.” It’s often thought that we have answered that question with Turing computability and generative grammar, but we haven’t. He was talking about infinite use, not the generative capacity. Yes, we can understand the generation of the expressions that we use, but we don’t understand how we use them. Why do we decide to say this and not something else? In our normal interactions, why do we convey the inner workings of our minds to others in a particular way? Nobody understands that. So, the infinite use of language remains a mystery, as it always has. Humboldt’s aphorism is constantly quoted, but the depth of the problem it formulates is not always recognized.

      !- example : permanent mystery - language - Wilhelm von Humboldt's phrase "infinite use" - has never been solved - Why do we decide to say one thing among infinitely many others?

    1. in this moment Chris and the deer have their own coagulation: fusing into one “buck” (and obviously Peele was playing upon this terminology associated with the Black male slave), they jointly charge and kill their enemy. Together, the “vermin” strike back
    2. Chris makes use of the dead deer; another (more mystical) analysis could posit that the deer takes revenge on the hunter, using Chris’s body as a vehicle
    3. First, the conflation of the deer with the devaluation of Black life nods to the long-standing tradition of using animals to speak back to the power structures upheld by plantation slavery in the form of animal folktales. And second, this deer comes roaring back to life. He gets his revenge on the family that made his noble head into a trophy. The taxidermied deer is a speaking animal that has a kind of second life, and there are multiple ways we might read its importance in Chris’s escape

      back to life, revenge, trophy, head, speaking animal with second life, the deer also fights back

    4. the deer strikes back
    5. Like the trickster tales discussed above, the films we are looking at here do not make animals the focal point, but use them as a means of “thinking with” humans.
    6. The trickster is an animal low on the pecking order (like a rabbit) who finds himself in a jam and must use his wits, charms, and other skill sets to outfox his more powerful enemies. He is an animal surrogate that speaks softly of strategies for resistance
    7. Wagner emphasizes that such animal tales often provided coded ways of imparting strategies for resistance and that this story has historical connections not only to the trope of the speaking animal from African trickster mythologies like the spider Anansi, but also perhaps, to Aesop’s animal fable
    8. reduce animals to mere metaphors, similes, or symbols do not seem promising for theorizing ethical recognition of actual animals. Donna Haraway, for example, criticizes philosophical texts that show a “profound absence of curiosity about or respect for and with actual animals, even as innumerable references to diverse animals are invoked.”15 Susan McHugh, meanwhile, suggests that “the aesthetic structures of metaphor, though precariously supporting the human subject, seem unable to bear animal agency.”16
    9. She may believe the comparison reveals the moral horror of industrial animal agriculture, but, as Bénédicte Boisseron argues, such comparisons “instrumentalize” Blackness in a “self-serving” way, ignoring the complex and ongoing Black struggle against dehumanizing discourses and institutions in order to frame “the animal” as “the new black.”
    10. remind us of the historical reduction of the human to the status of an animal under transatlantic slavery, but also were used as a mode of resistance for enslaved peoples

      first half is type 1, second half is type 2

    11. Rather than viewing fables as operating with a purely substitutive logic, where the animal stands in for the human, recent criticism explores the possibility that the fable can imagine relationality and even allyship among species
    12. lens of alliance, not solely analogy
    13. Rather, the appearance of animals in some recent films highlights the unequal treatment of Black lives in America in a manner that continues the fable tradition, and simultaneously emphasizes the human
    14. These works encode various strategies of survival in an era in which Black lives continue to be devalued
    15. Any resistance must be sanitized so as to be tolerable” for the general audience.5 But resistance also works not by being sanitized, but by being hidden in plain sight, coded as symbols legible to some but not to all. The use of animal fables has a long-standing history dating back to slavery as providing such a coded language of resistance

      Get Out’s use of the deer ... Chris, Black resistance, fables ... taxidermy hidden in plain sight, coded so that only Chris understands

    16. Wagner notes that the weaker animals use their wits as a means of overcoming the unequal power distribution in the world they navigated

      slavery-era fables, weak/“vermin” animals; intro to Get Out’s deer ... wits and taxidermy

    17. the “speaking animal,” which acknowledges the dialectic capacity of the symbolic animal of fables to stage a conversation about subjugation and resistance, but simultaneously, to point beyond itself to the reality of animal life.

      speaking animals ... speaking through eyes/perspective

    18. Paraphrasing Joel Chandler Harris, Bryan Wagner writes that animal stories were “political allegories in which the relative position of the weaker animals corresponded to the global perspective of the race.”

      how Dean uses the deer and how Poe uses the cat ... not how Chris uses the deer

  25. Nov 2022
    1. It can be useful to use with keyword arguments, which require symbol keys.
    1. Austria's annual energy consumption is about 404 TWh (1 terawatt-hour = 10¹² watt-hours) and rises by more than 2 % per year. About 25 % of it comes from renewable energy sources. A rough breakdown shows that transport, industry, and other consumers each account for roughly one third of energy consumption. A share of about 60 TWh is consumed in the form of electrical energy. Thanks to Austria's favourable natural conditions, about 60 % of that can be generated from hydropower. Because of steadily rising consumption and, to a lesser extent, the liberalization of the electricity markets, Austria has not covered even its own electricity demand since the end of the 1990s. Compared with the rest of Europe, Austria has a high share of renewable energy, mainly thanks to hydropower. In Europe, electricity is generated roughly 85 % from fossil fuels and nuclear energy and 15 % from renewable sources. At present, then, there is a predominant dependence on fossil energy sources in all areas.
    1. Once you have collected and summarized the information, it’s time to highlight some key elements. Include index information like the author’s name, book location, or the link URL. For longer Zettels, highlighting the learning objectives or key points in a bullet list might be helpful. The main point is to write your notes in such a way that you will easily be able to quickly get the gist of the material when you come across it again.

      Step 2: Rewrite your notes for the Zettelkasten: after you summarize, 1) highlight the key elements, 2) set your index, 3) note the objective

    2. Step 1: Read and take smart notes When working, write down your thoughts and the reason why you are taking particular note of a piece of information. This way, you will better understand the focus and reasoning behind the information you jot down. Even better, summarize the information and write it in your own words as much as you can.

      Writing smart notes means: write the information in your own words - why you thought about it that way - what you think about it

    3. Permanent notes are stand-alone ideas that can be made without any direct context to other sourced information such as books, videos, or other available data.   Permanent notes can be made as a recap or summary of the information just researched or learned, but can also be thoughts that popped into your brain while thinking over a myriad of information or while analysing any given context. The aim of permanent notes is to process the notes you have made and analyze how they affect your interests, thinking, and research. You then cherry pick the notes that add value to your existing ideas and connect the new information to what you already know and have saved in your database.

      A note you make when an idea pops up in your mind

    4. The technique of grouping information, organizing ideas into categories, and creating tags to help you find grouped information at a later stage is the art of reference notes. When we reference something, it is safe to say that the topic or idea we are writing about is part of a bigger topic or is information accredited to someone or someplace else. We use this technique in various daily circumstances and the function is available on almost every software and app available today. 

      use a hashtag

    5. The Zettelkasten method is good for when you want to: Systematically organize important information Find information again, even years later Develop your own ideas

      It will help you to:

    1. For example, if I make an application (Client) that allows a user (Resource Owner) to make notes and save them as a repo in their GitHub account (Resource Server), then my application will need to access their GitHub data. It's not secure for the user to directly supply their GitHub username and password to my application and grant full access to the entire account. Instead, using OAuth 2.0, they can go through an authorization flow that will grant limited access to some resources based on a scope, and I will never have access to any other data or their password.
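
      A minimal, hypothetical Java sketch of the authorization-code flow the example above describes, using GitHub's public OAuth endpoints. The client ID, client secret, redirect URI, and "repo" scope are placeholder assumptions; state validation, error handling, and token storage are omitted.

      ```java
      import java.net.URI;
      import java.net.URLEncoder;
      import java.net.http.HttpClient;
      import java.net.http.HttpRequest;
      import java.net.http.HttpResponse;
      import java.nio.charset.StandardCharsets;

      public class GitHubOAuthSketch {
          // Placeholders: values you would get by registering the Client with GitHub.
          static final String CLIENT_ID = "your-client-id";
          static final String CLIENT_SECRET = "your-client-secret";
          static final String REDIRECT_URI = "https://example.com/callback";

          // Step 1: send the user (Resource Owner) to GitHub's authorization page,
          // asking only for repo access instead of their username and password.
          static String authorizationUrl() {
              return "https://github.com/login/oauth/authorize"
                      + "?client_id=" + CLIENT_ID
                      + "&redirect_uri=" + URLEncoder.encode(REDIRECT_URI, StandardCharsets.UTF_8)
                      + "&scope=repo";
          }

          // Step 2: after GitHub redirects back with ?code=..., exchange that code
          // for an access token limited to the granted scope.
          static String exchangeCode(String code) throws Exception {
              String form = "client_id=" + CLIENT_ID
                      + "&client_secret=" + CLIENT_SECRET
                      + "&code=" + code
                      + "&redirect_uri=" + URLEncoder.encode(REDIRECT_URI, StandardCharsets.UTF_8);
              HttpRequest request = HttpRequest
                      .newBuilder(URI.create("https://github.com/login/oauth/access_token"))
                      .header("Accept", "application/json")
                      .header("Content-Type", "application/x-www-form-urlencoded")
                      .POST(HttpRequest.BodyPublishers.ofString(form))
                      .build();
              HttpResponse<String> response = HttpClient.newHttpClient()
                      .send(request, HttpResponse.BodyHandlers.ofString());
              return response.body();   // JSON with access_token; the app never sees the password
          }
      }
      ```

      The point of the flow, as the note says, is that the application ends up holding a scoped token rather than the user's GitHub credentials, so access can be limited or revoked without exposing the password.
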
    1. There are two situations where an init-like process would be helpful for the container.
    2. highly recommended that the resulting image be just one concern per container; predominantly this means just one process per container, so there is no need for a full init system

      container images: whether to use full init process: implied here: don't need to if only using for single process (which doesn't fork, etc.)

    1. Unless you are the maintainer of lvh.me, you cannot be sure it will not disappear or change its RRs for lvh.me. You can use localhost.localdomain instead of localhost, by adding the following lines to your hosts file:

      127.0.0.1 localhost localhost.localdomain
      ::1 localhost localhost.localdomain

      This is better than using lvh.me because: you may not always have access to a DNS resolver when developing
  26. Oct 2022
    1. They were adding social structures and context to the data. Basically adding social software design principles to a large volume of data.

      After letting professionals in a company have access to their internal BI data, they made it re-usable for themselves by
      - adding social structures (iirc indicating past and present people, depts etc., curating it for specific colleagues, forming subgroups around parts of the data)
      - adding context (iirc linking it to ongoing work, and external developments, adding info on data origin)

      Thus they started socially filtering the data, with the employee network as social network [[Social netwerk als filter 20060930194648]].

    2. they had given a number of their professionals access to their business intelligence data. Because they were gathering so much data nobody really looked at for lack of good questions to ask of the dataset. The professionals put the data to good use, because they could formulate the right questions.

      Most data in a company is collected for a single purpose (indicators, reporting, marketing). Companies usually don't look at how that data about themselves might be re-used by themselves. Cf. [[Data wat de overheid doet 20141013110101]] where I described this same effect for the public sector (based on work for the Court of Audit), not tying it back to this here. n:: re-use company internal data