81 Matching Annotations
  1. Last 7 days
  2. Jul 2020
  3. Jun 2020
    1. The DBMS performs the best on both the low-contention and high-contention workloads with the Oracle/MySQL and NuoDB configurations. This is because these systems' storage schemes scale well in multi-core and in-memory systems, and their MV2PL protocol provides comparatively higher performance regardless of the workload contention. HYRISE, MemSQL, and HyPer's configurations yield relatively lower performance, as the use of the MVOCC protocol can bring high overhead due to the read-set traversal required by the validation phase. Postgres and Hekaton's configurations lead to the worst performance, and the major reason is that the use of append-only storage with O2N ordering severely restricts the scalability of the system. This experiment demonstrates that both the concurrency control protocol and the version storage scheme can have a strong impact on the throughput.

      Conclusion: comparison across the different DBMS configurations

    2. We also observed that the performance of a MVCC DBMS is tightly coupled with its GC implementation. In particular, we found that a transaction-level GC provided the best performance with the smallest memory footprint. This is because it reclaims expired tuple versions with lower synchronization overhead than the other approaches. We note that the GC process can cause oscillations in the system's throughput and memory footprint.

      Conclusion: GC

    3. Lastly, we found that the index management scheme can also affect the DBMS's performance for databases where many secondary indexes are constructed. The results in Sect. 7.5 show that the logical pointer scheme always achieves a higher throughput, especially when processing update-intensive workloads.

      Conclusion: index management
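
      The trade-off behind logical pointers can be sketched in a few lines of Python (an illustrative model with invented names, not code from the paper): with physical pointers, every secondary index stores a version's address and must be updated whenever an update creates a new version, whereas logical pointers route all indexes through a single indirection slot.

```python
# Illustrative sketch only -- class and function names are invented.

class IndirectionLayer:
    """Maps a stable tuple ID to the location of its current version."""
    def __init__(self):
        self.slot = {}  # tuple_id -> current version

    def install(self, tuple_id, version):
        self.slot[tuple_id] = version  # one write, regardless of index count

def update_physical(indexes, key, new_version):
    """Physical pointers: every secondary index must be touched per update."""
    for idx in indexes:  # cost grows with the number of secondary indexes
        idx[key] = new_version

def update_logical(indirection, tuple_id, new_version):
    """Logical pointers: only the indirection slot changes on update."""
    indirection.install(tuple_id, new_version)  # O(1); indexes untouched
```

      With 20 secondary indexes, `update_physical` performs 20 index writes per new version while `update_logical` performs one, which matches the widening gap the annotations describe for update-intensive workloads.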

    4. We observed that the performance of the append-only and time-travel schemes is influenced by the efficiency of the underlying memory allocation schemes; aggressively partitioning memory spaces per core resolves this problem. The delta storage scheme is able to sustain a comparatively high performance regardless of the memory allocation, especially when only a subset of the attributes stored in the table is modified. But this scheme suffers from low table scan performance, and may not be a good fit for read-heavy analytical workloads.

      Conclusion: version storage
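
      A minimal Python sketch of the delta storage idea (illustrative only; the function names are invented, not the paper's implementation): each update stores just the before-images of the attributes it changed, so small updates copy little data, but reading an older version means replaying deltas backwards from the master copy.

```python
# Illustrative sketch of delta version storage -- invented helper names.

def apply_update(master, deltas, changes):
    """Record before-images of the changed attributes, then update in place."""
    before = {attr: master[attr] for attr in changes}
    deltas.append(before)   # a delta holds only the modified attributes
    master.update(changes)

def reconstruct(master, deltas, n_back):
    """Rebuild the version n_back updates ago by replaying before-images."""
    tuple_ = dict(master)
    for delta in reversed(deltas[len(deltas) - n_back:]):
        tuple_.update(delta)
    return tuple_
```

      A full table scan of an old snapshot pays this replay cost for every tuple, which is why the annotation flags delta storage as a poor fit for read-heavy analytical workloads.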

    5. Overall, we found that MVTO works well on a variety of workloads. None of the systems that we list in Table 1 adopt this protocol.

      Result: concurrency control protocols

    6. The performance gap is enlarged to 40% with the number of secondary indexes increased to 20. Fig. 23 further shows the advantage of logical pointers. The results show that for the high contention workload, the DBMS's throughput when using logical pointers is 45% higher than the throughput of physical pointers. This performance gap decreases in both the low contention and high contention workloads with the increase of the number of threads.

      Index management: logical pointers are better

    7. The results in Fig. 22b show that under high contention, the logical pointer scheme achieves 25% higher performance compared to the physical pointer scheme.

      Index management: logical pointers are better

    8. The results in Fig. 20a indicate that transaction-level GC achieves slightly better performance than tuple-level GC for the read-intensive workload, but the gap increases to 20% in Fig. 20b for the update-intensive workload. Transaction-level GC removes expired versions in batches, thereby reducing the synchronization overhead. Both mechanisms improve throughput by 20–30% compared to when GC is disabled. Fig. 21 shows that both mechanisms reduce the memory usage.

      GC: comparison of different levels (2)
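
      A rough Python sketch of the batching idea behind transaction-level GC (a hypothetical helper, not the paper's implementation): expired versions are grouped per finished transaction and freed in one step once every active transaction started after the batch's end timestamp, so the synchronization cost is paid per batch rather than per tuple version.

```python
# Hypothetical sketch of transaction-level batch reclamation.

def reclaimable(batches, active_txn_timestamps):
    """batches: list of (end_ts, expired_versions) per finished transaction.
    A batch is safe to free once no active transaction started before its
    end_ts; freed batches are removed from `batches` and returned."""
    horizon = min(active_txn_timestamps, default=float("inf"))
    freed = [b for b in batches if b[0] < horizon]
    batches[:] = [b for b in batches if b[0] >= horizon]
    return freed
```

      Tuple-level GC would instead make this safety check (and the attendant synchronization) once per expired version, which is the overhead the quoted results attribute the 20% gap to.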

    9. The results in Fig. 18 show that COOP achieves 45% higher throughput compared to VAC under read-intensive workloads. In Fig. 19, we see that COOP has a 30–60% lower memory footprint per transaction than VAC. Compared to VAC, COOP's performance is more stable, as it amortizes the GC overhead across multiple threads and the memory is reclaimed more quickly. For both workloads, we see that performance declines over time when GC is disabled because the DBMS traverses longer version chains to retrieve the versions. Furthermore, because the system never reclaims memory, it allocates new memory for every new version.

      GC: comparison of different levels

    10. The append-only and time-travel schemes' performance is stable regardless of the number of modified attributes. As expected, the delta scheme performs the best when the number of modified attributes is small because it copies less data per version. But as the scope of the update operations increases, it is equivalent to the others because it copies the same amount of data per delta.

      Version storage: comparison over the number of modified attributes

    11. We use transaction-level background vacuuming GC and compare the orderings using two YCSB workload mixtures. We set the transaction length to 10. We fix the number of DBMS threads to 40 and vary the workload's contention level.

      Version Chain Ordering: N2O vs O2N
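
      The difference between the two chain orderings can be sketched as follows (illustrative Python with an invented node structure, not the systems' actual code): under O2N a reader must walk from the oldest version to reach the latest, so chains that grow faster than GC prunes them make every read more expensive, while under N2O the chain head already is the latest version.

```python
# Illustrative version-chain traversal -- invented dict-based nodes.

def latest_o2n(chain_head):
    """O2N: head is the oldest version; traverse to the tail for the latest."""
    steps, node = 0, chain_head
    while node["next"] is not None:
        node = node["next"]
        steps += 1
    return node["value"], steps  # traversal cost grows with chain length

def latest_n2o(chain_head):
    """N2O: head *is* the newest version; no traversal needed."""
    return chain_head["value"], 0
```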

    12. We use the YCSB workload mixtures in this experiment, but the database is changed to contain a single table with 10 million tuples, each with one 64-bit primary key and a variable number of 100-byte non-inline VARCHAR type attributes. We use the read-intensive and update-intensive workloads under low contention (θ=0.2) on 40 threads with each transaction executing 10 operations. Each operation only accesses one attribute in a tuple.

      Version storage: standard setup

    13. MVOCC is more likely to abort NewOrder transactions, whereas the Payment abort rate in MV2PL is 6.8× higher than NewOrder transactions. These two transactions access the same table, and again the optimistic protocols only detect read conflicts in NewOrder transactions in the validation phase. SI+SSN achieves a low abort rate due to its anti-dependency tracking, whereas MVTO avoids false aborts because the timestamp assigned to each transaction directly determines their ordering.

      TPC-C: comparison

    14. MVOCC incurs wasted computation because it only detects conflicts in the validation phase.

      TPC-C: MVOCC still sucks

    15. There is not a great difference among the protocols except MV2PL; they handle write-write conflicts in a similar way and again multi-versioning does not help reduce this type of conflicts.

      MV2PL: WW Conflict

    16. Beyond this contention level, the performance of MVOCC is reduced by ∼50%. This is because MVOCC does not discover that a transaction will abort due to a conflict until after the transaction has already executed its operations. There is nothing about multi-versioning that helps this situation.

      MVOCC: reduced performance

    17. TPC-C: This benchmark is the current standard for measuring the performance of OLTP systems [43]. It models a warehouse-centric order processing application with nine tables and five transaction types. We modified the original TPC-C workload to include a new table scan query, called StockScan, that scans the Stock table and counts the number of items in each warehouse. The amount of contention in the workload is controlled by the number of warehouses.

      Benchmark: TPC-C -- nine tables & five transaction types

    18. YCSB: We modified the YCSB [14] benchmark to model different workload settings of OLTP applications. The database contains a single table with 10 million tuples, each with one 64-bit primary key and 10 64-bit integer attributes. Each operation is independent; that is, the input of an operation does not depend on the output of a previous operation. We use three workload mixtures to vary the number of reads/update operations per transaction: (1) read-only (100% reads), (2) read-intensive (80% reads, 20% updates), and (3) update-intensive (20% reads, 80% updates). We also vary the number of attributes that operations read or update in a tuple. The operations access tuples following a Zipfian distribution that is controlled by a parameter (θ) that affects the amount of contention (i.e., skew), where θ=1.0 is the highest skew setting.

      Benchmark: YCSB -- independent operations, three workload mixtures
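
      The role of θ can be illustrated with a simplified Zipfian probability calculation (the textbook Zipf form, not necessarily YCSB's exact generator): tuple i is drawn with probability proportional to 1/i^θ, so θ=0 is uniform access and larger θ concentrates accesses on a few hot tuples, i.e., raises contention.

```python
# Simplified Zipfian access distribution -- illustrative, not YCSB's generator.

def zipf_probabilities(n, theta):
    """Probability of accessing tuple i (1-indexed) among n tuples,
    proportional to 1 / i**theta."""
    weights = [1.0 / (i ** theta) for i in range(1, n + 1)]
    total = sum(weights)
    return [w / total for w in weights]
```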

    19. For each trial, we execute the workload for 60 seconds to let the DBMS warm up and measure the throughput after another 120 seconds. We execute each trial five times and report the average execution time.

      Standard measurement setup

    20. We execute all transactions as stored procedures under the SERIALIZABLE isolation level.

      The experiments use the highest isolation level

  4. May 2020
  5. Apr 2020
  6. Feb 2020
    1. Robinson Crusoe’s experiences are a favourite theme with political economists

      Marx refers to the thought experiment, common in economics, which is sometimes called Robinson Crusoe economics.

      Doing "Robinson Crusoe economics" consists in imagining what can be learned, if anything, from a one agent economy that will provide insight into a real world economy with lots of agents.

  7. Nov 2019
    1. I have been looking for something like this for years.

      This is a test of the experimental version of Hypothesis that does not open and cover the window by default. I want to add this functionality to my blog because I am looking for more feedback on this blog and I am hoping that there are more people out there interested in annotating the web.

  8. May 2019
    1. Genomic DNA Preparation: Genomic DNA, used as a template for PCR, was isolated from approximately 20 wild-type flies. Flies were added to 125ul Homogenisation buffer (200mM sucrose, 100mM Tris-HCl pH 8.0, 50mM EDTA, 0.5% SDS) and ground using a pestle. The mixture was then incubated at 67°C for 10mins. Subsequently, 1.5M KAc was added and incubated on ice for 10mins, followed by DNA extraction using an equal volume of phenol chloroform. The mixture was centrifuged at 16,000g and the DNA precipitated using 0.3M NaAc and ethanol. The DNA pellet was then resuspended in 25μl of TE with 25ug RNaseA. PCR: Unless otherwise stated, all PCR reactions were performed using Phusion High Fidelity DNA Polymerase (NEB). PCR reactions were carried out at either 20μl or 50μl with the following reaction setup: 1x GC or HF Buffer, 200μM dNTPs, 0.5μM of both primers, 1 Unit of Phusion and a maximum of 200ng of DNA. Thermocycling conditions used were as per the manufacturer's instructions with a minimum of 35 PCR cycles at an elongation rate of 30s/kb at 72°C. Elongation time was adjusted as appropriate for the PCR product. Where necessary, Tm was optimised using gradient PCR. All PCR reactions were performed on a BIO-RAD T100 Thermal Cycler. Both PCR purification and gel extraction were performed using the NucleoSpin Gel & PCR Clean up kit (Macherey-Nagel), as per the manufacturer's instructions. Unless otherwise specified, all primers used in this thesis were designed using NCBI's Primer-BLAST, selecting against any primers or primer pairs that would produce unspecific products (Ye et al., 2012).
    2. Molecular Biology Protocols
    3. Experimental Methods
  9. Aug 2018
    1. Open to Exploration

      Internet Archaeology is trialling a new feature by Hypothes.is which enables everyone to make annotations on journal content. To get started, just select any text in any article and add your annotation for everyone to see (Annotations are public by default but highlights are private, visible only to you when you’re logged in to your Hypothes.is account). You can even share your annotations on social media,

      Try this for more info https://web.hypothes.is/blog/varieties-of-hypothesis-annotations-and-their-uses/

      I'm interested to see how everyone uses it!

  10. Dec 2017
    1. To determine whether the Aβ toxicity–limiting effects of p38γ were tau-dependent, we crossed APP23.p38γ−/− with tau−/− mice.

      The basic question for Fig. 2: is the protective effect that p38γ has against amyloid beta toxicity only seen in mice that have tau proteins? Experiments were run on mice that had various combinations of p38γ and tau.

    2. To test whether p38γ−/− augments Aβ-induced deficits, we crossed p38γ−/− with Aβ-forming APP23 mice.

      Fig. 1 C-J experiments are testing to see if the absence of p38γ increases amyloid beta deficits. Mice that have amyloid beta deficits were bred with mice that did not have p38γ kinase. Amyloid beta-forming APP23 mice have amyloid beta induced deficits.

    3. We used mice with individual deletion of p38α, p38β, p38γ, or p38δ

      Each mouse had one isoform of the p38 kinase eliminated from its cellular biology. All other forms are present.

    4. To understand the roles of p38 kinases in AD, we induced excitotoxic seizures with pentylenetetrazole (PTZ), an approach widely used for studying excitotoxicity in AD mouse models

      The basic goal of the research was to understand the role of p38 kinases in AD mice. The experiment used mice that were made to have excitotoxic seizures through injection of pentylenetetrazole (PTZ). Injection of PTZ has been used in the past to study excitotoxicity in AD mice models.

  11. Jan 2017
    1. prospective interviewee,

      Just a side spiel: In terms of an interviewee and data, everything really is data. I'll be interviewing freshmen next semester with other SLU students and some things I have already told the group to take note of in notebooks (ha ha) are the different responses the interviewee gives. In a way, the sad little freshmen turn into our experiment. Everyone in the group records a different response. These responses include the obvious oral responses, body language, and tone of voice.

  12. Dec 2016
    1. Written by rchase, feel free to edit. It's an experiment.

      Since I took the notes using hypothesis, the password to edit is hypothesis.

  13. Sep 2016
    1. EDA (ectodysplasin A)

      This is a protein involved in cell signalling between two layers of skin (ectoderm and mesoderm) , which is especially important in embryo formation.

      This allows the formation of hair follicles, sweat glands, and teeth.

  14. May 2016
    1. all editions

      6600 files were downloaded; 333 files appear to be these missing editions with the placeholder text. I have not yet manually verified all of this... which is partly the point, right?

    2. each text file

      6267 print editions, from 1893 - 2010

    3. .75 range

      that is, from .72 to .9. They all have the same placeholder text, but the quality of the ocr makes some consistent errors, which is interesting.

  15. Jan 2016
    1. After some classroom experimentation, a more specific list of smart technology use criteria emerged

      As we transform the "HOW" of what we do to support schools, encouraging safe fail experimentation will be important. The design process supports this and iteration consists of success and failure.

    2. This template essentially encouraged the participating teachers to do—and document—action research.

      I think this "formalizing" of teacher and student experimentation is really important. Ensuring that this newfound knowledge (for better or worse) is shared with colleagues is pivotal.