Nothing in between. A model that arrives at the correct answer through careful reasoning receives the same reward as one that guesses correctly by chance.
This passage exposes a problem with current training methods: they do not distinguish whether a model reached the correct answer through deliberate reasoning or a lucky guess, which leads to model overconfidence.
Some will suggest color coding, but I've never understood it: it limits you to about a dozen topics and presupposes that you'll be interested in those same topics for decades to the exclusion of others. It wholly lacks flexibility.
I use a card index much like W. Ross Ashby. Start with index cards labeled A-Z, then add topics as you encounter them, each with a volume number and page number.
Thus:
C<br /> commonplace books: 1-3, 1-88, 4-67 (see also 'Locke, John')<br /> crickets: 2-45<br /> caviar: 3-22, 3-25 (see also 'eggs')
When you've got a handful of cards for each letter it can be useful to separate things out (a la John Locke) as "CA", "CE", "CI", "CO", "CU" and re-alphabetize to make finding things easier and quicker. At this point it can also be helpful to add tabbed dividers to find the "C" section more quickly. Eventually you may have a single card (or three) with an individual heading for topics you write about frequently. (Naturally you could do a single card for each topic as you start, but it often makes the search process take longer and you'll probably have a lot of lonely, unused cards. It also tends to stifle serendipity and creativity because you're not scanning through your topics as thoroughly or frequently.)
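If you ever want to mirror the paper index digitally, the scheme above is just a mapping from topic to volume-page locators plus "see also" cross-references; a minimal sketch in Python (the function names are mine, the entries come from the example above):

```python
from collections import defaultdict

# A commonplace-book index: topic -> list of "volume-page" locators,
# plus optional "see also" cross-references, as in the example above.
index = defaultdict(list)
see_also = {}

def add_entry(topic, volume, page):
    index[topic].append(f"{volume}-{page}")

def lookup(topic):
    """Return locators for a topic plus any cross-referenced topics."""
    return index.get(topic, []), see_also.get(topic, [])

add_entry("commonplace books", 1, 3)
add_entry("commonplace books", 1, 88)
add_entry("commonplace books", 4, 67)
see_also["commonplace books"] = ["Locke, John"]
add_entry("crickets", 2, 45)
add_entry("caviar", 3, 22)
add_entry("caviar", 3, 25)
see_also["caviar"] = ["eggs"]

# The Locke-style "CA"/"CE"/"CO" subdivision is just an alphabetical sort.
c_topics = sorted(t for t in index if t.startswith("c"))
```

The sort at the end is all the "re-alphabetize" step amounts to; on paper you do it by shuffling cards instead.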
I tend to write index words either in the margins of my commonplace or underline them with a red pencil within the text to make finding things on the page easier upon later search.
You can start small with a recipe card box and eventually move your way up to something more industrial as you need it. There are also lots of options in between.
Indexing can be an art and was also a great science (before Google made everyone lazy), so there are some useful handbooks on the topic below:
Other related ideas: https://boffosocko.com/research/zettelkasten-commonplace-books-and-note-taking-collection
reply to u/commonbankpen at https://reddit.com/r/commonplacebook/comments/1syayru/how_do_you_index/
WeirdML V2 places models in an unusually resource-constrained environment: models get only five attempts to submit working code, with no access to external tools. This setup has not been the focus of recent RL training.
Most people might assume that all AI evaluation metrics reflect the same trend of progress, but the WeirdML V2 metric shows no acceleration because it imposes a resource-constrained environment that recent reinforcement learning training has not targeted. This suggests that measured AI progress can depend heavily on the evaluation method.
SWE-chat is a living dataset; our collection pipeline automatically and continually discovers and processes sessions from public repositories
Most people treat AI research datasets as static, one-off collections, but the authors propose a "living dataset" that must be continually updated to reflect real usage. This challenges the reliance on static benchmarks in traditional AI evaluation and argues for dynamic, ongoing data collection.
Luna conducted roughly 20 interviews on Google Meet with the camera off, hired 2 full-time employees after 5-15 minute calls, and rejected CS and physics students for lacking retail experience.
This AI-driven hiring approach upends traditional HR practice: camera-off, very short interviews still produced effective hiring decisions and recognized the value of industry-specific experience, hinting that AI may evaluate candidates more efficiently than humans in some domains.
Add screenshot-based LLM judge evaluator, screenshot collector, and --parallelize flag
Introducing a screenshot-based LLM judge and parallelization is a striking innovation. Evaluating model performance via screenshots gives a more direct view of visual understanding during automation, while the --parallelize flag greatly speeds up benchmarking; together they represent a real advance in how AI systems are evaluated.
We develop reliable a posteriori error estimators for fully discrete Runge–Kutta discontinuous Galerkin approximations of nonlinear convection–diffusion systems endowed with a convex entropy in multip
Surprisingly, the core challenge here is not computational accuracy but knowing how inaccurate you are. An a posteriori error estimator gives a reliable upper bound on the error of a numerical solution without knowing the true solution. It is like being able to grade your own exam without an answer key: a high form of self-knowledge in numerical computation, and the theoretical basis for adaptive mesh refinement.
Using SemanticCommit, we recorded via telemetry all instances of edit, check-for-conflicts, make-change, and local and global resolution actions.
sentences describing methods the authors used; one sentence at a time
A sub-task was considered a failure if the participant was unable to complete it within the time limit.
sentences describing methods the authors used; one sentence at a time
Finally, we conducted an informal interview about their experience.
sentences describing methods the authors used; one sentence at a time
After both tasks were completed, participants filled out a final survey to compare the two conditions.
sentences describing methods the authors used; one sentence at a time
We chose GPT-4o for performance and latency reasons, as it performed optimally against our evals.
sentences describing methods the authors used; one sentence at a time
We also ran evaluations of model latency and classification performance under varying false positive rates for the following LLMs by OpenAI: GPT-4o, GPT-4o-mini, and o3-mini.
sentences describing methods the authors used; one sentence at a time
For each task, participants were tasked with integrating three new pieces of information into the memory, one at a time ("sub-tasks").
sentences describing methods the authors used; one sentence at a time
We ensured each list was 30 items long as our pilot studies suggested this was long enough that manual detection starts to become unwieldy (users need to scroll up and down the document), but short enough that participants could become familiar in a short period.
sentences describing methods the authors used; one sentence at a time
We adapted two intent specifications from our evals: Mars Game Design Document and Financial Advice AI Agent Memory, as these tasks mapped to the two paradigmatic types covered in Sections 2 and 2.1 (design documents, and AI memory of the user).
sentences describing methods the authors used; one sentence at a time
We recruited 12 participants (7 female, 5 male) through the mailing lists of two research universities and one multinational technology company.
sentences describing methods the authors used; one sentence at a time
We chose OpenAI's ChatGPT Canvas as a baseline for five reasons: (i) it is a popular, commercially available tool, hence it is likely familiar to users; (ii) it provides a document editing view, where users can select text and ask GPT to rewrite it, or chat with an AI to make global edits; (iii) it employs a similar class of model (GPT-4o); (iv) it supports similar editing features as SemanticCommit like inline text selection, conflict highlighting, and a diff view, while adding free-form editing; and (v) similar interfaces like Anthropic Artifacts tended to rewrite the specification entirely, and did not offer Canvas's "diff" view to allow for a fair comparison.
sentences describing methods the authors used; one sentence at a time
With participant consent, we recorded audio and screen-casts, and participants were encouraged to think aloud.
sentences describing methods the authors used; one sentence at a time
Four coauthors created the evals, and two coauthors manually double-checked all conflicts, a process that took several days.
sentences describing methods the authors used; one sentence at a time
We ran one pilot study with five users of our card-based interface, and a second with four users of a revised interface.
sentences describing methods the authors used; one sentence at a time
Our explorations went through substantial iterations and prompt prototyping over a period of eight months, evolving in response to two pilot studies and progressing from a card-based interface to a list of texts.
sentences describing methods the authors used; one sentence at a time
We iterated on prompts using ChainForge [5] by setting up an evaluation pipeline against our datasets, which allowed us to observe the effects of prompt changes and model choices.
sentences describing methods the authors used; one sentence at a time
To measure statistical significance, we used Mann–Whitney–Wilcoxon tests and report the p-values.
sentences describing methods the authors used; one sentence at a time
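The Mann–Whitney–Wilcoxon test the authors report can be sketched in pure Python via rank sums and a normal approximation for the p-value (in practice one would use `scipy.stats.mannwhitneyu`; this sketch omits the tie correction):

```python
import math

def mann_whitney_u(x, y):
    """Two-sided Mann-Whitney U test: midranks over the pooled sample,
    U from the rank sum, p-value from a normal approximation."""
    n1, n2 = len(x), len(y)
    combined = sorted((v, i) for i, v in enumerate(list(x) + list(y)))
    ranks = [0.0] * (n1 + n2)
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j][0] == combined[i][0]:
            j += 1
        avg_rank = (i + j + 1) / 2.0        # average of ranks i+1 .. j
        for k in range(i, j):
            ranks[combined[k][1]] = avg_rank
        i = j
    r1 = sum(ranks[:n1])                     # rank sum of first sample
    u1 = n1 * n2 + n1 * (n1 + 1) / 2.0 - r1
    u = min(u1, n1 * n2 - u1)                # smaller U statistic
    mu = n1 * n2 / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u - mu) / sigma
    p = 2 * 0.5 * (1 + math.erf(z / math.sqrt(2)))  # two-sided
    return u, min(p, 1.0)
```

The Mann–Whitney test is a sensible choice here because Likert and TLX responses are ordinal, so a rank-based test avoids assuming normality.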
For qualitative analysis, the first author performed open coding on participant responses and audio transcripts to identify themes, which were used to interpret the qualitative results.
sentences describing methods the authors used; one sentence at a time
In the post-task surveys, we collected self-reported NASA Task Load Index (TLX) scores, Likert-scale ratings for ease of use, and responses on how well the AI helped participants identify, understand, and resolve semantic conflicts.
sentences describing methods the authors used; one sentence at a time
Each condition had a time limit of 15 minutes, after which the participant completed a post-task survey.
sentences describing methods the authors used; one sentence at a time
Before each task, participants received a tutorial on the assigned tool and were given five minutes to explore it using a test document.
sentences describing methods the authors used; one sentence at a time
Both the order of task assignment and tool assignment were counterbalanced and randomly assigned.
sentences describing methods the authors used; one sentence at a time
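Counterbalancing two tools across two tasks reduces to cycling participants through the four possible (tool order, task order) combinations; a sketch (the participant count of 12 comes from the study, the assignment scheme is an assumption):

```python
from itertools import cycle, product

tools = ("SemanticCommit", "Canvas")
tasks = ("Mars Game", "Financial Advice")

# All four (tool order, task order) combinations.
orders = list(product([tools, tools[::-1]], [tasks, tasks[::-1]]))

# Round-robin over 12 participants: each combination used 12 / 4 = 3 times.
assignment = [combo for _, combo in zip(range(12), cycle(orders))]
```

With 12 participants and 4 combinations, this round-robin happens to balance exactly; any residual imbalance would need randomization on top.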
We conducted a controlled within-subjects study with mixed methods, comparing SemanticCommit with a baseline interface.
sentences describing methods the authors used; one sentence at a time
We run end-to-end on our four eval datasets using GPT-4o and GPT-4o-mini and report the mean ± stddev for accuracy, precision, recall, and F1 scores for the three approaches in Figure 5.
sentences describing methods the authors used; one sentence at a time
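Reporting mean ± stddev of accuracy, precision, recall, and F1 across datasets reduces to computing the four metrics per dataset from confusion counts and then aggregating; a sketch with made-up counts (not the paper's numbers):

```python
from statistics import mean, stdev

def metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 from confusion counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Illustrative counts for four eval datasets.
per_dataset = [metrics(40, 5, 10, 45), metrics(35, 8, 7, 50),
               metrics(42, 4, 6, 48), metrics(38, 6, 9, 47)]

# Mean +/- stddev across datasets, per metric.
summary = {k: (mean(d[k] for d in per_dataset),
               stdev(d[k] for d in per_dataset))
           for k in per_dataset[0]}
```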
We compare our end-to-end system against two simpler methods: (i) DropAllDocs, which adds all documents to the context for conflict classification; and (ii) InkSync [56] which generates a JSON list of string-replace operations.
sentences describing methods the authors used; one sentence at a time
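The InkSync-style baseline can be sketched as applying a JSON list of string-replace operations to a document (the `{"find": ..., "replace": ...}` schema here is an illustrative assumption, not InkSync's actual format):

```python
import json

def apply_edits(document, ops_json):
    """Apply a JSON list of {"find": ..., "replace": ...} operations
    in order; each op replaces the first occurrence only."""
    for op in json.loads(ops_json):
        if op["find"] not in document:
            raise ValueError(f"cannot locate: {op['find']!r}")
        document = document.replace(op["find"], op["replace"], 1)
    return document

ops = json.dumps([
    {"find": "budget of $100", "replace": "budget of $150"},
    {"find": "two rovers", "replace": "three rovers"},
])
doc = "The game has a budget of $100 and two rovers."
```

The brittleness is visible in the sketch: an op whose `find` string does not appear verbatim in the document simply fails, which is one reason to compare against it.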
In order to minimize relevance assessment issues, we apply a PageRank-based relevance ranking over the KG, akin to HippoRAG [36].
sentences describing methods the authors used; one sentence at a time
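The PageRank-based relevance ranking over the knowledge graph can be sketched as power iteration with the teleport distribution concentrated on seed nodes mentioned in the edit (the tiny graph and seeding below are illustrative, not HippoRAG's exact formulation):

```python
def personalized_pagerank(edges, seeds, damping=0.85, iters=50):
    """Power iteration with teleportation restricted to seed nodes."""
    nodes = sorted({n for e in edges for n in e})
    out = {n: [b for a, b in edges if a == n] for n in nodes}
    teleport = {n: (1 / len(seeds) if n in seeds else 0.0) for n in nodes}
    rank = dict(teleport)
    for _ in range(iters):
        nxt = {n: (1 - damping) * teleport[n] for n in nodes}
        for n in nodes:
            if out[n]:
                share = damping * rank[n] / len(out[n])
                for m in out[n]:
                    nxt[m] += share
            else:  # dangling node: send its mass back to the teleport set
                for m in nodes:
                    nxt[m] += damping * rank[n] * teleport[m]
        rank = nxt
    return rank

# Tiny KG: documents linked to entities they mention, and back.
edges = [("doc1", "budget"), ("budget", "doc1"),
         ("doc2", "rover"), ("rover", "doc2"),
         ("doc2", "budget"), ("budget", "doc2")]
rank = personalized_pagerank(edges, seeds={"budget"})
```

Documents closer to the seeded entities end up with higher scores, which is what makes this usable as a relevance ranking rather than a global importance measure.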
We implement the back-end using a knowledge-graph (KG) RAG architecture [36] consisting of two phases: pre-processing and inference.
sentences describing methods the authors used; one sentence at a time
Through a within-subjects study with 12 participants comparing SemanticCommit to a chat-with-document baseline (OpenAI Canvas), we find differences in workflow: half of our participants adopted a workflow of impact analysis when using SemanticCommit, where they would first flag conflicts without AI revisions then resolve conflicts locally, despite having access to a global revision feature.
sentences describing methods the authors used; one sentence at a time
We process this data in a three-stage pipeline (Figure 6). In the first stage, Sentence Segmentation and Categorization, abstracts are split into individual sentences using the NLTK package, and each sentence is classified into one of the five pre-defined aspects as listed in Section 4.1.1. Classification is performed by prompting an LLM (see prompt used in Appendix D.1) with the sentence and its full abstract.
sentence relating to methodology
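The first pipeline stage can be sketched as below. A naive regex splitter stands in for NLTK's `sent_tokenize`, and a keyword stub stands in for the LLM classifier prompted with the sentence and its abstract; both stand-ins are assumptions for illustration:

```python
import re

ASPECTS = ["Problem Domain", "Gaps in Prior Work",
           "Methodology/Contribution", "Results/Findings",
           "Discussion/Conclusion"]

def split_sentences(abstract):
    """Naive splitter standing in for nltk.tokenize.sent_tokenize."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", abstract) if s.strip()]

def classify(sentence):
    """Keyword stub standing in for the LLM aspect classifier."""
    s = sentence.lower()
    if "we conducted" in s or "we present" in s:
        return "Methodology/Contribution"
    if "results" in s or "we find" in s:
        return "Results/Findings"
    return "Problem Domain"

abstract = ("Note-taking tools struggle at scale. "
            "We conducted a study with 12 participants. "
            "Results show faster conflict resolution.")
labeled = [(s, classify(s)) for s in split_sentences(abstract)]
```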
Then, we segment sentences within each aspect into grammar-preserving chunks (see prompt used in Appendix D.2). This results in grammatically coherent chunks that are the basis of structure patterns. After identifying chunk boundaries, we again prompt an LLM to generate labels for chunks in a human-in-the-loop approach: starting from an initial set of labels for chunk roles, when a new label is generated, a researcher from the research team examines the new label and merges it with existing labels if appropriate, controlling for the total number of labels.
sentence relating to methodology
In this study, we allowed participants to experience views of same-aspect sentences (Section 4.1.1) with different combinations of highlighting, ordering, and alignment (as described in Section 4.1.2 and Section 4.1.4) enabled or not, in order to understand which and/or what combinations most effectively supported users' ability to skim and read laterally across documents.
sentence relating to methodology
Inspired by GP-TSM [24], AbstractExplorer first segments sentences into grammar-preserving chunks—segments that respect grammatical boundaries, i.e., an LLM judges that the sentence can be truncated at that chunk boundary without breaking the grammatical integrity of the preceding text. Each chunk is then classified by an LLM as having one of nine pre-defined roles, each of which has its own assigned color.
sentence relating to methodology
AbstractExplorer classifies sentences into five pre-defined aspects common in CHI abstracts: Problem Domain, Gaps in Prior Work, Methodology/Contribution, Results/Findings, and Discussion/Conclusion.
sentence relating to methodology
In the context of close reading of research paper abstracts at scale, our findings suggest AbstractExplorer enabled participants to scale up the number of papers they could review through efficient skimming and find common patterns and outliers through sentence comparison, resulting in a rich synthesis of ideas and connections to foster deeper engagement with scholarly articles.
sentence relating to methodology
We extend existing approaches through automated role annotation, establishing alignments using grammatical chunk boundaries, and preserving sentences in their entirety, instead of relying on abstract meta-data.
sentence relating to methodology
We demonstrate how slicing sentences according to roles and visually aligning them can help readers perceive cross-document relationships in a coherent manner.
sentence relating to methodology
In this work, we introduce a new paradigm for exploring a large corpus of small documents by identifying roles at the phrasal and sentence levels, then slicing on, reifying, grouping, and/or aligning the text itself on those roles, with sentences left intact.
sentence relating to methodology
The front-end React app allows users to view partially loaded custom aspects while they are being generated.
sentence relating to methodology
Custom aspects are generated dynamically via API calls to a FastAPI back-end, which prompts an LLM to check whether each sentence in the filtered subset matches the aspect description—either in terms of overall content or a matching token—and extracts the most relevant chunk of that sentence to highlight.
sentence relating to methodology
All the chunks and corresponding labels are pre-computed and stored as JSON files, ensuring responsiveness and low latency of the web application.
sentence relating to methodology
After obtaining an expanded set of high-level chunk labels, we assign them to each of the sentence chunks by using LLMs in a multi-class classification few-shot learning task, with the initial labels and assignment as examples.
sentence relating to methodology
From this pane, users can toggle and view highlighted sentence chunks, click to scroll to the relevant abstract for each sentence, or remove bookmarks from the list.
sentence relating to methodology
When users click on a bookmark icon to the left of any specific sentence in the Cross-Sentences Relationships Pane, that sentence is added to a bookmark list that can be viewed in the Bookmarked Sentences alternate pane.
sentence relating to methodology
Our user studies on 24 participants demonstrate the usability of our proposed virtu.
sentence relating to methodology
Sentence bookmarking helps users keep track of papers to revisit later.
sentence relating to methodology
To address this, we conducted a systematic review and meta-analysis of 56 papers p.
sentence relating to methodology
Users can enter a search term into the search bar and only papers that include that exact term will appear in the Cross-Sentence Relationships pane.
sentence relating to methodology
Through two user studies, we found that with an accompanying size-ch.
sentence relating to methodology
Filtering enables users to narrow their focus to a subset of the corpus while still benefiting from features that help them recognize cross-sentence relationships within the remaining abstracts.
sentence relating to methodology
Through a VR experiment (N=31) and a following online survey (N=142).
sentence relating to methodology
The Abstracts panel can be customized by users to display the full abstract text, an abstract “TLDR” (a shorter abstractive summary generated by an LLM), or both at the same time.
sentence relating to methodology
To allow users to contextualize individual sentences within their respective abstracts, we link the Cross-Sentence Relationship and Abstract panels: when users click on any sentence in the Cross-Sentence Relationships pane, the corresponding full abstract is automatically highlighted and scrolled into view in the Abstracts panel, offering additional context when needed.
sentence relating to methodology
methods, we conducted a controlled experiment (N=24) to assess the user performance, experience and VR sickness.
sentence relating to methodology
Together, the vertical and horizontal juxtapositions are designed to help users identify both high-level commonalities and nuanced variations across structurally similar sentences.
sentence relating to methodology
everyday life, we ran three co-design workshops (N=69).
sentence relating to methodology
If users prefer, sentences can be toggled to a left-justified view to facilitate conventional reading and skimming.
sentence relating to methodology
In our mixed-method study with 21 participants, we evaluated the usability and effectiveness.
sentence relating to methodology
This may effectively decrease context switching and lead to more robust mental models without requiring more cognitive load.
sentence relating to methodology
These alignment options are intended to enable users to more easily read analogous chunks across sentences from different abstracts, ignoring details serving other roles within the sentence.
sentence relating to methodology
conducted an eight-month longitudinal user study with 12 older adults aged 60 years and above to demonstrate the eff.
sentence relating to methodology
By default, sentences are vertically aligned by the middle of their shared structure tuple, but users can freely switch between the three alignment options using the button group atop the Cross-Sentence Relationship pane.
sentence relating to methodology
In a study (N=18), we compared our multimodal technique with pseudo-haptics al.
sentence relating to methodology
AbstractExplorer also aligns the sentences in three different ways, as illustrated in Figure 5: vertical alignment by the middle of the structure tuple (second element), vertical alignment by the left of the structure tuple (first element), and left-justified alignment (horizontal juxtapositions).
sentence relating to methodology
Through a study with 30 participants, we show that the system enables users to perc.
sentence relating to methodology
This ordering prioritizes dominant structural patterns (largest groups first) while exposing fine-grained variations (via length-sorted triplets), mirroring how humans compare sentences, if SMT is an accurate description in this domain of comparative close reading.
sentence relating to methodology
exploratory and a confirmatory factor analysis with a total of 159 participants.
sentence relating to methodology
Within each of those groups, sentences are arranged by increasing length.
sentence relating to methodology
We conducted an experiment with 24 participants using a real vehicle along a route with known.
sentence relating to methodology
Sentences are recursively grouped by sequences of three chunk roles, with groups ordered by decreasing size.
sentence relating to methodology
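The grouping and ordering described above can be sketched as: group sentences by their role triplet, order groups by decreasing size, and order sentences within a group by increasing length (role names and sentences below are illustrative):

```python
from collections import defaultdict

def order_sentences(items):
    """items: list of (sentence, role_triplet) pairs. Group by triplet,
    order groups by decreasing size, then sentences within each group
    by increasing length."""
    groups = defaultdict(list)
    for sentence, triplet in items:
        groups[triplet].append(sentence)
    ordered = sorted(groups.items(), key=lambda kv: -len(kv[1]))
    return [(t, sorted(ss, key=len)) for t, ss in ordered]

items = [
    ("We ran a study (N=24).", ("method", "study-size", "result")),
    ("We conducted a long experiment (N=31).", ("method", "study-size", "result")),
    ("Prior tools fall short.", ("gap", "claim", "evidence")),
]
structure_view = order_sentences(items)
```

Largest-groups-first surfaces the dominant structure patterns, while the length sort within a group exposes the fine-grained variations the text describes.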
We conducted a study involving 32 participants organized in teams of two.
sentence relating to methodology
AbstractExplorer orders sentences within each structure group based on the sequential pattern of chunk roles (vertical juxtapositions).
sentence relating to methodology
In a within-subjects study (20 participant pairs), we examined the effects of vibrotactile hap.
sentence relating to methodology
This allows users to first understand the different structure patterns and their commonality, before diving into close reading at scale of the sentences that share a particular structure by clicking any of the “Expand” toggles.
sentence relating to methodology
Through task-based experiments with 25 pairs of participants (50 individuals), we evaluated quanti.
sentence relating to methodology
An example sentence is also shown alongside each structure.
sentence relating to methodology
The total number of sentences with each structure in the (possibly filtered) corpus is shown in parentheses and visualized as a histogram.
sentence relating to methodology
In AbstractExplorer, sentences are grouped by this definition of sentence structure.
sentence relating to methodology
These role-based color highlights enable quick identification of analogous chunks and visual pattern matching over sequences of chunks across sentences.
sentence relating to methodology
Each role label has a corresponding unique color shown at the top of the “Cross-Sentence Relationship” panel.
sentence relating to methodology
We adopt the color palette of Tableau 10, which was carefully designed to be clearly distinguishable.
sentence relating to methodology
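Assigning each chunk role its own Tableau 10 color can be sketched as a simple mapping (the hex values are the commonly published Tableau 10 palette, as shipped in matplotlib's "tab10"; the nine role names are illustrative, not the paper's):

```python
# Tableau 10 palette, as shipped in matplotlib's "tab10" colormap.
TABLEAU_10 = ["#1f77b4", "#ff7f0e", "#2ca02c", "#d62728", "#9467bd",
              "#8c564b", "#e377c2", "#7f7f7f", "#bcbd22", "#17becf"]

# Nine illustrative chunk roles, each assigned its own color.
ROLES = ["actor", "action", "artifact", "method", "population",
         "measure", "result", "comparison", "qualifier"]

ROLE_COLORS = dict(zip(ROLES, TABLEAU_10))
```

Nine roles fit comfortably inside the ten-color palette, so no two roles share a hue.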
We conducted five user studies with 98 participants.
sentence relating to methodology
Viewing one aspect at a time enables users to closely read and compare just the analogous sentences of abstracts, which may be cognitively easier than the comparative close reading of many abstracts in their entirety, especially if cross-sentence relationships are pre-computed and reified in the interface.
sentence relating to methodology
We chose the sentence as our unit for cross-document alignment because: (1) it preserves complete propositional content (unlike phrases or words), (2) maintains grammatical coherence when isolated (unlike arbitrary text spans), and (3) serves as the minimal self-contained unit where aspects can be meaningfully compared.
sentence relating to methodology
To keep details at the forefront of the interface, we designed a mechanism to slice abstracts for viewing them from specific angles, allowing for comparative close reading at scale at the sentence level.
sentence relating to methodology
To do this, ABSTRACTEXPLORER is designed to support close reading of purpose-defined slices through a large collection (1,057 in our studies) of paper abstracts.
sentence relating to methodology
ABSTRACTEXPLORER is designed to help researchers (1) skim, read, and better familiarize themselves with the contents and composition style of a large corpus of abstracts and (2) reason about cross-paper relationships at scale without abstracting away the author-written sentences about their own work.
sentence relating to methodology
Finally, a summative study (Section 6) describes how researchers used ABSTRACTEXPLORER to familiarize themselves with a corpus of ~1000 CHI paper abstracts—reading across a larger and more diverse collection of abstracts and more easily discerning relationships and distributions across prior work.
sentence relating to methodology
Second, an ablation study with eye-tracking (Section 5) revealed that the three key features of ABSTRACTEXPLORER's central cross-sentence relationships pane (sentence order, role-coordinated highlighting, and alignment) work best in concert, not alone.
sentence relating to methodology
Three studies inform and validate ABSTRACTEXPLORER's design: First, a formative study (Section 3) suggested unmet needs and interest in our approach to supporting cross-document reasoning.
sentence relating to methodology
We demonstrate these features' utility in the context of helping researchers skim and read closely and laterally across a corpus of scientific abstracts.
sentence relating to methodology
AbstractExplorer instantiates new minimally lossy SMT-informed techniques for skimming, reading, and reasoning about a corpus of similarly structured short documents: phrase-level role classification that drives sentence ordering, highlighting, and spatial alignment.
sentence relating to methodology
A summative study (N=16) describes how these features support users in familiarizing themselves with a corpus of paper abstracts from a single large conference with over 1000 papers.
sentence relating to methodology
An ablation study (N=24) validated that these features work best together.
sentence relating to methodology
AbstractExplorer has a unique combination of LLM-powered (1) faceted comparative close reading with (2) role highlighting enhanced by (3) structure-based ordering and (4) alignment.
sentence relating to methodology
We compose and evaluate a system, called AbstractExplorer, with analogous SMT-derived characteristics for the domain of scientific abstract corpus familiarization.
sentence relating to methodology
It seems that the microscopy imaging section has been omitted from the Methods section. You can see several images in the manuscript but no information on how they were acquired.
to the ways that documents were organized in archives,
Find archaeological papers which described how Mesopotamians in the ANE organized their documents.
Mention via @Podany2013
Such actions include, for example, enhancing microbial activity by adding appropriate electron donors or acceptors to the system (29), or introducing abiotic reactants into contaminated groundwaters such as zero-valent metals in permeable reactive barriers (30)
It will be good to look back on this for certain ideas on the types of treatments
“INFORMATION RETRIEVAL” 1961 IBM BUSINESS COMPUTER PROMO MAINFRAME PUNCHCARD COMPUTERS SM10435<br /> by [[Periscope Film]] on YouTube <br /> accessed on 2026-01-04T15:56:12
Some great visuals hiding in here.<br /> Starts out with details for properly threading film projector<br /> keywords - indexing methods<br /> Key Word in Context (KWIC)<br /> inverted file (aka lookup file)<br /> Notice this is a few years after Desk Set (1957)<br /> Selective dissemination of information<br /> Fake company name: Alamer
"KEYS TO THE LIBRARY" 1950s EDUCATIONAL FILM CARD CATALOGS, ENCYCLOPEDIAS & DEWEY DECIMAL XD71644<br /> by [[Periscope Film]] on YouTube<br /> accessed on 2026-01-04T15:53:51
But I am sure that hierarchies can get in the way, especially at the beginning.
Wanting to come up with and use knowledge hierarchies seems almost endemic to the beginning note taker. Kuehn's observation here closely matches my experience in watching newcomers to the note taking/zettelkasten space on Reddit.com and other fora over the past 8 years or so.
a hybrid system, as I kept these notes in close connection with Ecco Pro, which I used to keep an index of the notebook material.
Manfred Kuehn used Ecco Pro to index his notebooks in a hybrid analog/digital system.
Rugg, Gordon and Petre, Marian. 2020. The Unwritten Rules of PhD Research. 3rd ed. Open University Press.
Only three mentions of index cards here...
Put ideas on index cards – one to a card – and then arrange them in different structures. Again, you can do this in a series of passes, using a different criterion each time; this will help you to identify core concepts, structures and outliers.
It's almost as if they're suggesting putting ideas onto index cards after-the-fact rather than from the start as older manuals would have suggested. This would seem to add a huge amount of work to the process.
Finding the notes: This will become problematic with larger volumes. For the most part, two tools suffice for me: 1) an alphabetical index; 2) notes on the bibliographical slips, in case the problem arises from the name.
What belongs together changes anyway depending on the question being asked and cannot be predetermined schematically. No straitjacket, but rather the principle of arbitrariness
The cross-referencing technique solves all organizational problems. Misplacements must be corrected by cross-referencing, not by rearranging.
This is particularly true when other cross references on paper can't easily be found and fixed the way they might be in digital form. Creating a pointer to the correct location is the quickest and most efficient method for fixing a mis-filing on paper.
Each note must have a fixed location that is never changed, as finding it depends on this. Remove it when needed and replace it exactly where it was placed. This requires numbering the slips of paper. There will be long numbers; therefore, alternate between numbers and letters for quicker recognition: 533/15 d 17 a 1
FILING PROCEDURES IN BUSINESS 1965 OFFICE MANAGEMENT / SECRETARY TRAINING FILM 62244<br /> by [[Periscope Film]] on YouTube<br /> accessed on 2025-12-03T00:14:39
THE SCIENCE OF THE FILING ENGINEER<br /> The Simplex Alphabetic Method Is Considered the Most Efficient and Takes Care of Average Requirements - It May Be the 95% File - Complex Methods Also Explained
Butters, Roland W. 1921. “The Science of the Filing Engineer.” Filing & Office Management 6(7): 193–94. https://www.google.com/books/edition/Filing_Office_Management/o1rnAAAAMAAJ?hl=en&gbpv=1&pg=PA193&printsec=frontcover&dq=duplex.
The next Complex method in order, is the Numeric, which may be divided into three classes, straight numeric, duplex and decimal. It is safe to say that with the straight Alphabetic or Geographic, ninety-five percent of the cases where an Index is used will be more efficiently handled by the use of either one of these Methods, than by the Numeric. However, there are some cases where there is a great deal of cross reference, thus making the use of the Numeric method more advantageous.
This is likely the reason why most commonplacers using index card systems use alphabetic set ups by subject rather than Niklas Luhmann's duplex numeric variation.
McCord, James Newton. 1920. A Textbook of Filing. D. Appleton. https://www.google.com/books/edition/A_Textbook_of_Filing/SBowAAAAYAAJ?hl=en&gbpv=0.
Classification of Files of the U.S. Explosive Plant at Nitro, W. Va.<br /> By Miss J. L. Dillard
Dillard, J. L. 1919. “Subject and Classification Filing: Classification of Files in the U.S. Explosive Plant at Nitro, W. Va.” Filing: A Magazine on Indexing & Filing 3(2): 401–3. https://www.google.com/books/edition/Filing/nxFLAAAAYAAJ?hl=en&gbpv=1&pg=PA401&printsec=frontcover&dq=duplex (December 2, 2025).
SUBJECT filing is a branch of filing which is not much used in the business world but which deserves as much attention as any other. It is more complicated than other systems and an adaptation has to be worked out for each business to which it is applied, but after the classification has once been made, it will prove to be for economy and increased efficiency.
Lennig, Margaret Antoinette. 1920. Filing Methods: A Text Book on the Filing & Indexing of Commercial & Government Records. https://www.google.com/books/edition/Filing_Methods/vVagv_GyENwC?hl=en&gbpv=0.
Hudders, Eugene Russell. 1919. Indexing and Filing: A Manual of Standard Practice. 5th ed. New York: The Ronald Press Company. https://www.google.com/books/edition/Indexing_and_Filing/p_MRAAAAYAAJ?hl=en&gbpv=0.
Hunter, Estelle Belle. 1923. Modern Filing Manual. Rochester, NY: Yawman and Erbe Manufacturing Company. https://www.google.com/books/edition/Modern_Filing_Manual/F-lNAQAAMAAJ?hl=en.
all-cause mortality 0.77 (0.73 to 0.80)
DRF for the HIA
The participants were the English as a foreign language (EFL) teachers and students of Class VIII from a state-run school of Paschim Medinipur district, West Bengal, India. The school was chosen randomly by the researchers. The data was collected from December 2019 to March 2020. The researchers visited the institution thrice in a week during the abovementioned period. Three teachers and sixty students from Sections A and B participated in this study.
Methods (qualitative case study): One state-run school in Paschim Medinipur, India; 3 EFL teachers + 60 Class VIII students; interviews + classroom observations (pp. 6–7). Why it matters: Establishes credibility/CRAAP (scope, site, instruments).
Elevational declines occur in both directions from the continental divide that separates Banff and Yoho National Parks, and these lower-elevation areas contain higher densities of many species, especially ungulates in the east end of Banff
LOCATION; Part of environmental challenge
Banff and Yoho National Parks
LOCATION; relevant for summary
Semistructured hour-long interviews addressed interpretations of illness, self-care practices, and use of and access to health care. Interviews were tape-recorded and transcribed verbatim. Respondents were interviewed in their language of choice, either Tagalog or English. With the exception of 2 respondents who were interviewed in English, respondents were interviewed in Tagalog by a fluent Tagalog speaker.
method - narrative; interviews overwhelmingly Tagalog-first
Over six thousand leaves (which is to say, thirteen thousand pages) survive, and experts estimate that this represents about a quarter of the original total. This implies that Leonardo filled his notebooks at the rate of about a thousand pages a year, all obsessively covered with drawings, diagrams and idiosyncratic mirror handwriting. ‘I worked out at one point that he must have written about fifty academic-length books, if you put them all together,’ says Kemp. ‘He was never at rest.’
🌲When Themed Logs are More Useful than Daily Notes by [[Eleanor Konik]]
Konik seems to have realized that filing things topically can be a valuable practice, as if this weren't the mode of the day for centuries now. How did the daily note become such a thing that this was lost? Is it the focus on notebook-based bullet journals? Programmers creating daily notes?
GENERAL METHODS
Report about study participants and details about visual stimuli, rivalry task, and perceptual selection measure implemented in the study. "Catch trials" were used to control for response bias, and eye-dominance was measured so that participants with >85% dominance of one eye could be excluded.
Writing The Long Goodbye by [[Mark Coggins]]
Experimental evidence suggests shading similar to that expected from FPV may lead to increased phytoplankton biomass and reduced macrophyte biomass (50), though this remains to be tested
Inferring how primary producers respond to light and shade using experimental data is an example of an experiment-based method.
Using our measured emissions per kWh, we can estimate that at present, FPV-derived GHG emissions from waterbodies are 6.7 Gg CO2-eq year−1 (assuming ∼1000 kWh kWp−1). At modeled practical potential generation of 9434 TWh year−1, FPV-derived waterbody GHG emissions may increase to 24.6 Tg CO2-eq year−1.
It uses a method that scales up global emissions based on measured GHG output per unit of energy (like per kWh).
we estimate a 26.8% increase in greenhouse gas emissions following FPV installation using a carbon dioxide-equivalent basis.
This sentence explains a method that estimates greenhouse gas emissions using the CO₂-equivalent standard.
Rates of bubble accumulation were similar between pond types (p = 0.955; Figure S4A), so any changes in CH4 ebullition associated with FPV installation must have been driven by differences in bubble CH4 concentration; indeed, the CH4 concentration in bubble trap headspace in ponds with FPV (60.0 ± 4.70% CH4) was nearly twice as high as in ponds without FPV (34.4 ± 4.00% CH4; p < 0.001; Figure S4B).
Measurement of bubble accumulation rate is an experimental monitoring technique.
Combining measured dissolved gas concentrations and k600 values to estimate diffusive CO2 and CH4 flux, we found that, on average, whole-pond diffusive CO2 emissions were 23.6 ± 7.50% lower and diffusive CH4 emissions were 17.5 ± 25.1% lower following FPV deployment (Figure 6A and Table S4).
Calculating diffusive flux from measured dissolved gas concentrations and k600 values is a quantitative modeling method.
Using a BACI approach, we demonstrate that FPV deployment with 70% coverage led to increased pond GHG emissions within days of deployment, and this effect lasted for weeks to months. Increased emissions were driven by greater CH4 ebullition which offset reduced diffusive CO2 and CH4 emissions in FPV-covered ponds.
BACI (Before-After Control-Impact) is a classic ecological experimental design.
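The core BACI logic can be sketched as a simple difference-in-differences on pond means. This is a toy illustration with hypothetical numbers; the study itself fits a mixed model in R rather than this bare comparison:

```python
# Minimal BACI (Before-After Control-Impact) sketch: the treatment effect is the
# change in the impact ponds minus the change in the control ponds.
# Toy numbers for illustration only; the paper fits a mixed model in R.

def baci_effect(impact_before, impact_after, control_before, control_after):
    """Difference-in-differences estimate of the deployment effect."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(impact_after) - mean(impact_before)) - (
        mean(control_after) - mean(control_before)
    )

# Hypothetical CH4 ebullition rates (mmol m-2 d-1) before/after FPV deployment
effect = baci_effect(
    impact_before=[1.0, 1.2], impact_after=[2.4, 2.6],
    control_before=[1.1, 0.9], control_after=[1.2, 1.0],
)
print(round(effect, 2))  # 1.3: increase attributable to FPV under BACI assumptions
```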
we calculated whole-pond diffusive flux, assuming edge area for both control and treatment ponds was 270 m2, the pond center surface area for control ponds was 630 m2, pond center surface area for treatment ponds was 270 m2 (this subtracts the total area of FPV array that is in physical contact with the water surface), and that fluxes were constant over a 24 h period.
The method for scaling experimental results to the whole pond is described.
We compared GHG dynamics between ponds with and without FPV using a mixed model approach in R Statistical Software following a BACI approach.
Descriptions of the statistical analysis methods are provided.
measuring linear rates of CO2 and CH4 accumulation (or depletion) in a floating chamber (18.93 L; 0.071 m2 cross-sectional area) connected to a cavity-ringdown spectroscope (Los Gatos, Inc.) for 5 min and collecting surface water and air samples for analysis of CO2 and CH4 concentrations from the same location immediately after the 5 min incubation period as described previously
The measurement approach using floating chambers is described.
Diffusive exchange of dissolved gases between ponds and the atmosphere (mmol m−2 h−1) can be calculated from dissolved gas concentrations as (35): diffusive flux = k_x × (C_water − C_air), where C_water and C_air indicate the gas concentration (μmol L−1) in the water and atmosphere, respectively
Equations used to calculate diffusive flux are described.
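A minimal sketch of that flux equation; the rate constant and concentrations below are illustrative values, not figures from the paper:

```python
# Sketch of the diffusive flux equation quoted above:
#   diffusive flux = k_x * (C_water - C_air)
# Illustrative units: k in m/h, concentrations in mmol/m^3,
# giving flux in mmol m^-2 h^-1. All example values are hypothetical.

def diffusive_flux(k_x, c_water, c_air):
    """Air-water diffusive gas flux; positive = emission to the atmosphere."""
    return k_x * (c_water - c_air)

# Hypothetical CO2 supersaturation: 80 mmol/m^3 in water vs 20 mmol/m^3
# at atmospheric equilibrium, with k = 0.05 m/h
print(round(diffusive_flux(0.05, 80.0, 20.0), 2))  # 3.0 mmol m^-2 h^-1
```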
We calculated ebullitive flux as: ebullitive flux = ([CH4] × bubble volume) / (V_m × funnel area × time), where [CH4] is the concentration of CH4 in the trap (μL L−1) and V_m is the molar volume of gas at standard conditions (22.4 L mol−1).
Specific equations used for calculations are explained.
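The ebullitive flux equation can be sketched the same way. Only V_m = 22.4 L/mol comes from the quoted text; the trap values below are hypothetical:

```python
# Sketch of the ebullitive flux equation quoted above:
#   ebullitive flux = ([CH4] * bubble volume) / (Vm * funnel area * time)
# Example trap values are hypothetical.

VM_L_PER_MOL = 22.4  # molar volume of an ideal gas at standard conditions

def ebullitive_flux(ch4_ul_per_l, bubble_volume_l, funnel_area_m2, time_h):
    """CH4 ebullition in umol m^-2 h^-1 from a bubble trap sample."""
    umol_ch4 = ch4_ul_per_l * bubble_volume_l / VM_L_PER_MOL  # uL CH4 -> umol
    return umol_ch4 / (funnel_area_m2 * time_h)

# Hypothetical trap: 500,000 uL/L CH4 (i.e., 50% CH4), 0.1 L of bubbles,
# 0.2 m^2 funnel, 24 h deployment
flux = ebullitive_flux(500_000, 0.1, 0.2, 24)
print(round(flux, 1))  # umol CH4 m^-2 h^-1
```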
We deployed passive bubble trap samplers from May to October 2023 to measure rates of ebullitive CH4 flux.
The use of bubble traps and how they are utilized is described.
We calculated dissolved gas concentrations using constants determined by Weiss (32) and Wiesenburg and Guinasso.
Constants used in calculations and literature-based methods are mentioned.
using a gas chromatograph equipped with a flame ionization detector and autosampler (Shimadzu GC 2014).
The name of the analytical instrument is specifically mentioned.
We sampled for dissolved GHG concentrations in pond surface water on two occasions in 2022, and 14 occasions in 2023 using a headspace equilibration approach.
The headspace equilibration method is described.
We characterized the temperature and dissolved oxygen concentrations of the water column in each pond using a thermistor and an optical dissolved oxygen sensor attached to a Manta +35 or a Manta +20 instrument (Eureka Water Probes, Austin, TX).
Specific instrument and sensor names are explicitly stated.
Floating solar arrays (Ciel et Terre International, France) were deployed on three ponds: the FPV array on pond 124 was constructed from June 15−29, 2023, pond 123 from June 29 to July 14, 2023, and pond 125 from September 18−28, 2023.
The source of the installation equipment and the details of the experimental setup are explicitly provided.
Using these measurements, we calculated diffusive CO2 and CH4 emissions and compared total GHG emissions between ponds with and without FPV.
The calculation method and the comparison reference are described.
We measured water column temperature, dissolved oxygen saturation, and dissolved CO2 and CH4 concentrations in surface and bottom waters, quantified rates of CH4 ebullition, and determined treatment-specific air−water gas exchange rates (i.e., k600 values)
The specific measurement parameters used in the experiment, along with the calculated coefficient k600, are mentioned.
We deployed FPV arrays on constructed ponds at the Cornell Experimental Pond Facility in New York, USA in summer 2023 (Figure 1). Arrays were designed to maximize power production potential and thus also potential impacts (70% panel coverage)
The installation method and the design intention of the PV experimental array are specifically described.
Here, we report results from the first two years of an ecosystem-scale experiment used to test the effect of FPV deployment on GHG dynamics and atmospheric GHG exchange in ponds
This sentence presents a method using ecosystem-scale experiments to measure the effect of FPV on GHG exchange.
Here, we use an ecosystem-scale experiment to assess how GHG dynamics in ponds respond to installation of operationally representative FPV
This sentence describes the use of ecosystem-scale experiments as a tool to measure GHG dynamics before and after FPV installation.
In order to establish the mentioned country-specific database, a national strategy should be launched.
A methodological proposal for building a database is presented.
The GWP result of the base scenario of this study is 5.24 g CO2-eq/kWh.
The GWP calculation is an output of tools commonly used in LCA.
Real data based on monthly energy generation collected from a wind farm for a year indicates capacity factors ranging from 21 to 64% with an average value of 40%.
It demonstrates the collection and application of actual measured data.
The results obtained by changing the recycle ratio of metals at EoL are given in Fig. 5.
An LCA analysis tool was used to derive results under varying conditions.
In this part of the study, two scenarios involving the transportation of the main units during the construction phase and the metal recycling ratios at end-of-life (EoL) are analysed.
A scenario analysis methodology was employed.
The main uncertainty in this LCA is the considered amount of recycling in the future after decommissioning the wind farm, and the impact of this uncertainty on the results is handled by performing scenario analysis for various recycling ratios.
Scenario analysis represents a typical methodological approach to addressing the uncertainty inherent in LCA.
Electricity consumption during the manufacturing and installation stage is strictly measured by the service provider to determine the cost. Similarly, the exact amount of diesel consumption is obtained from the facility records.
Measurement and record-based data collection methods are utilized as tools in the study.
The contributions of various phases of the life cycle to environmental impact categories are shown in Fig. 3.
Data is derived through visualization of the results obtained from the LCA analysis method.
The LCA study is performed as given in ISO 14040/14044 standards (ISO 2006a, b). Therefore, goal and scope definition, inventory analysis, impact assessment and interpretation are conducted in an iterative way.
The LCA implementation procedure, the use of international standards (ISO 14040/14044), and the iterative steps involved describe the specific methodological tools and processes employed in the research.
The objective of this study is to appraise the environmental impacts of a full-scale wind farm via LCA methodology in a cradle-to-grave scope.
This sentence outlines the methodological framework of this particular study (LCA over the entire life cycle).
The study by Jiang et al. (2018) examines the environmental impacts of a gearbox via LCA.
Clearly states LCA as the method applied to study a specific component (gearbox), thus demonstrating methodology in practice.
LCA is used to examine the environmental impacts of a wind farm with 76 turbines of 1.5 MW in another study (Ozoemena et al. 2018)
This explicitly describes the use of Life Cycle Assessment (LCA) as the method employed in the cited study.
The aim of this study is to investigate the environmental impacts of a full-scale wind farm using life cycle assessment methodology.
It clearly states the method used in the study (LCA), aligning directly with the “tools and methods” category.
We therefore advise researchers to earn trust and foster healthy working relationships with Indigenous peoples to determine research priorities and agreements long before data collection begins (Lake et al. 2017)
It outlines concrete practices for building trust and setting priorities prior to data collection.
Research design should then unfold in a collaborative and transparent manner, with input from IK holders (Adams et al. 2014)
It clearly explains the collaborative approach in the research design process and the methodological inclusion of IK holders.
At the onset of collaborative studies, scientists should first develop research agreements with Indigenous peoples in whatever form is locally appropriate, a step independent of any institutional ethics approvals
It presents specific methodological procedures that must be undertaken during the early stages of research, such as the establishment of research agreements.
McBride et al. (2017) used Participatory Geographic Information Systems that drew upon and analyzed IK observations from Indigenous peoples across the US related to fuel load, forest type, and burn severity.
It is a specific example of tool use that combines GIS technology with IK.
trackers could provide auxiliary natural history data whereas radio tracking was limited solely to data on movement.
It compares the types of data each method can collect, highlighting the advantages of IK-based tools.
Attum et al. (2008) demonstrated that estimates of Egyptian tortoise (Testudo kleinmanni) home ranges in North Sinai, Egypt, derived from radio telemetry were in agreement with estimates by Indigenous people, who tracked tortoises on foot,
It presents a specific methodological comparison between two tools: radio telemetry and direct tracking.
In the example mentioned above, Riedlinger and Berkes (2001) also described how Inuit observations and hypotheses of climate change in northern Canada could account for multiple interacting variables and ecological complexity, such as climate variability and sea-ice break-up.
The approach of using observation and hypothesis to explain complex system variables reflects a methodological aspect.