1,653 Matching Annotations
  1. Apr 2021
    1. Indians were not to be appeased—and certainly not brought into British public life.
      • Indians were kept from entering politics or public life
      • After the British government takeover in 1858, the British government sought not only direct control in place of the Company's control, but also direct control in place of letting Indians govern
    2. India Company violated its treaty obligations
      • The East India Company used excuses to violate treaties and conquer more land
      • Ex: In 1856, the British took Nawab Wajid Ali Shah's territory on grounds that he was weak and immoral
    3. The government also decided to collect taxes directly from peasants, displacing the landed nobles as intermediaries.
      • The British collected taxes from peasants directly, without letting the previous Indian power holders act as intermediaries
      • Peasants had to take out loans from moneylenders
      • Peasants who couldn't pay their loans would have to give up their land
    4. replacing East India Company rule by crown government in 1858,
      • The British government took over ruling India instead of the EIC in 1858
      • Perhaps the British government took over ruling India from the East India Company because, after the uprising, ruling colonial states was viewed as a military and political task rather than a merely economic one.
    5. In return, the Mughal emperor would receive a hefty annual pension.
      • People originally in power in colonized territories striking a deal with the colonizers for their benefit is a common pattern
    6. The East India Company’s Monopoly
      • The British started off trying to control just India's trade by setting up coastal trading posts
      • Around the early 1800s, the East India Company conquered many areas in India and surrounding areas

      Economic control -> political, military control

    7. During the first half of the nineteenth century the British rulers of India had dismantled most of the traditional powers of the nobility and the rights of peasants.

      The first half of the 1800s saw:

      • the disappearance of original Indian positions of power
      • the erosion of Indian state boundaries and sovereignties under Lord Dalhousie, governor-general from 1848
    1. Read chapter 11 "Memorizing Number" to see what Gardner says about available techniques. He only covers the phonetic major system and some basic associative techniques.

      No mention of the method of loci. Some interesting references listed for the chapter however.


  2. Mar 2021
    1. ‘bold’ means to have many observational consequences

      As I noted on the wiki that linked here: you don't test the hypothesis directly, you test the predictions, the "observational consequences", of the hypothesis.

    1. Cailin O’Connor. (2020, November 10). New paper!!! @psmaldino look at what causes the persistence of poor methods in science, even when better methods are available. And we argue that interdisciplinary contact can lead better methods to spread. 1 https://t.co/C5beJA5gMi [Tweet]. @cailinmeister. https://twitter.com/cailinmeister/status/1326221893372833793

  3. Feb 2021
  4. Dec 2020
    1. A scientist who does not utilize the scientific method is as much use as a carpenter who cannot make chairs or a plumber who cannot fix toilets. A science that exists as a fixed absolute, whose premises are not to be questioned, whose data is not to be examined and whose conclusions are not to be debated, is a pile of wood or a leaky toilet. Not the conclusion of a process, but its absence.

      Understanding science is a process.

  5. Nov 2020
    1. This is fascinating. I recognize it as something that is common knowledge to trained Feldenkrais Method practitioners. I have gained similar benefits from visiting a Feldenkrais practitioner, and many more musculoskeletal benefits as well. They use very gentle, gradual, and time-consuming "movement training" to "reset" muscular habits, many of which are harmful. It is amazing how much they can improve in every part of the musculoskeletal system; there are hundreds of muscles in complex patterns that can be employed inefficiently and parasitically. It can make a massive difference to the ease of daily life.

      When it comes to posture, trying to force the correct posture is hopeless; what is needed is to "switch off" the muscles that shouldn't be "on", and vice versa. Most people can't do that just by mimicking a position. In Feldenkrais, mostly you are relaxed and lying on your back, and the practitioner gently performs repetitive, counter-intuitive movements of your limbs which "persuade" muscles a certain way. Then when you sit up or stand up again, you tend to be "relaxed in the correct posture" rather than trying to force yourself into it.
    1. What is the STAR interview method? The STAR interview method is a technique you can use to prepare for behavioral and situational interview questions. STAR stands for: situation, task, action and result. This method will help you prepare clear and concise responses using real-life examples. Hiring managers ask behavioral interview questions to determine whether you are the right fit for a job. By using the STAR strategy, you can make sure you're fully addressing the interviewer's question while also demonstrating how you were able to overcome previous challenges and be successful.

      The [[STAR method]] can help people prepare for [[behavioural interview]] questions

    1. Case study analysis of the communities of Shishmaref, Newtok, Kivalina, and Quinhagak, Alaska, USA, was the primary method used to understand the environmental impacts


    1. διαδικασία (from Wiktionary, the free dictionary)

      Greek Noun

      διαδικασία • (diadikasía) f (plural διαδικασίες)

      1. procedure, process, method, protocol
      2. (computing) function, subroutine, procedure
  6. Oct 2020
    1. An Evaluation of Problem-based Learning Supported by Information and Communication Technology: A Pilot Study

      (Under "Viewing Options", select PDF.) In this article, Ernawaty and Sujono (2019) summarize results of a study funded by the Research and Higher Education Directorate of Indonesia. The study aimed to evaluate the cogency of information and communication technologies (ICTs) in problem based learning (PBL) and traditional teaching methods (TTM) based upon learner test scores. The concepts of PBL, TTM, and implications of ICTs are briefly reviewed. Results of the study revealed that PBL with the support of an ICT yielded the highest test scores. (6/10)

    1. But thirdly, and most valuably, the template gives you a big space at the bottom to write sentences that summarise the page.  That is, you start writing your critical response on the notes themselves.

      I do much the same thing; however, I'm typically doing it using Hypothes.is to annotate and highlight. These pieces go back to my own website, where I can keep, categorize, and even later search them. If I like, I'll often do these sorts of summaries on related posts themselves (usually before I post them publicly, if that's something I'm planning on doing for a particular piece).

    1. Weber notes that according to any economic theory that posited man as a rational profit-maximizer, raising the piece-work rate should increase labor productivity. But in fact, in many traditional peasant communities, raising the piece-work rate actually had the opposite effect of lowering labor productivity: at the higher rate, a peasant accustomed to earning two and one-half marks per day found he could earn the same amount by working less, and did so because he valued leisure more than income. The choices of leisure over income, or of the militaristic life of the Spartan hoplite over the wealth of the Athenian trader, or even the ascetic life of the early capitalist entrepreneur over that of a traditional leisured aristocrat, cannot possibly be explained by the impersonal working of material forces,

      Science could learn something from this. Science is so focused on idealized positive outcomes that it isn't paying attention to negative outcomes, and using those to better define its outline or overall shape. We need to define a scientific opportunity cost and apply it to the negative side of research to better understand and define what we're searching for.

      Of course, how can we define a new scientific method (or amend/extend it) to better take into account negative results, particularly in an age when so many results aren't even reproducible?

    1. Fieldwork usually means living with and living like those who are studied
    2. To be sure, ethnography has a long history, and its techniques, goals, and representational styles mean different things, not always complementary, to its many curious readers.


  7. link-springer-com.uaccess.univie.ac.at
    1. This leads me to my bet: by analyzing the form and content of futures talk in such sites of hyperprojectivity, we can understand the mechanisms by which future projections affect decisions, relations, and institutions
    2. I have made a number of theoretical arguments and placed some methodological bets for future research
  8. Sep 2020
    1. The PSH, at present, counts with a tower of 36 m height, a control room and a field of 29 heliostats. The total of the installed heliostats can be separated in two sizes, as follows: 12 heliostats of 36 m² (each one having 25 flat mirrors of 1.2 m × 1.2 m); 17 heliostats of 37.44 m² (each one with 32 flat mirrors of 1.3 m × 0.9 m). The total reflecting-area is close to 1,070 m². The heliostats installed on the field allow reaching a theoretical solar radiation concentration factor of 25, which corresponds to a thermal power of approximat

      method, control

    2. Table 2. Reference values for WBGT (°C) at corresponding work intensity.

      foundational reference for analyzing the effect of heat stress on productivity: reference values for WBGT by work intensity
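The reflecting-area figure in the heliostat excerpt above can be checked directly; the mirror counts and sizes below are taken from the quoted description, and the computation is just a sanity check of the stated total.

```python
# Sanity check of the PSH heliostat field's reflecting area,
# using the mirror counts and sizes from the quoted description.

area_small = 12 * (25 * 1.2 * 1.2)   # 12 heliostats, 25 mirrors of 1.2 m x 1.2 m each
area_large = 17 * (32 * 1.3 * 0.9)   # 17 heliostats, 32 mirrors of 1.3 m x 0.9 m each
total_area = area_small + area_large

# Matches the "close to 1,070 m2" figure in the text.
print(round(total_area, 2))  # 1068.48
```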

    1. get

      The HTTP method to which the middleware function applies.

      "Routes HTTP GET requests to the specified path with the specified callback functions."

      https://expressjs.com/en/5x/api.html#app.get

      What is GET? via MDN:

      HTTP defines a set of request methods to indicate the desired action to be performed for a given resource.

      The GET method requests a representation of the specified resource. Requests using GET should only retrieve data.
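As a sketch of what method-based routing like `app.get` provides, the toy router below (written in Python rather than JavaScript, and not Express's actual implementation; all names are illustrative) keys handlers by HTTP method and path, so a GET-only route ignores other methods:

```python
# Minimal sketch of method-based routing, loosely modeled on
# Express's app.get(path, handler). Names here are illustrative.

class Router:
    def __init__(self):
        self.routes = {}  # (method, path) -> handler

    def get(self, path, handler):
        # Register a handler only for HTTP GET requests to `path`.
        self.routes[("GET", path)] = handler

    def handle(self, method, path):
        handler = self.routes.get((method, path))
        if handler is None:
            return (404, "Not Found")
        return handler()

router = Router()
router.get("/hello", lambda: (200, "Hello, world"))

print(router.handle("GET", "/hello"))   # (200, 'Hello, world')
print(router.handle("POST", "/hello"))  # (404, 'Not Found') - GET-only route
```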

  9. Aug 2020
    1. Guo, L., Boocock, J., Tome, J. M., Chandrasekaran, S., Hilt, E. E., Zhang, Y., Sathe, L., Li, X., Luo, C., Kosuri, S., Shendure, J. A., Arboleda, V. A., Flint, J., Eskin, E., Garner, O. B., Yang, S., Bloom, J. S., Kruglyak, L., & Yin, Y. (2020). Rapid cost-effective viral genome sequencing by V-seq. BioRxiv, 2020.08.15.252510. https://doi.org/10.1101/2020.08.15.252510

  10. Jul 2020
  11. Jun 2020
  12. May 2020
    1. Mei, X., Lee, H.-C., Diao, K., Huang, M., Lin, B., Liu, C., Xie, Z., Ma, Y., Robson, P. M., Chung, M., Bernheim, A., Mani, V., Calcagno, C., Li, K., Li, S., Shan, H., Lv, J., Zhao, T., Xia, J., … Yang, Y. (2020). Artificial intelligence for rapid identification of the coronavirus disease 2019 (COVID-19). MedRxiv, 2020.04.12.20062661. https://doi.org/10.1101/2020.04.12.20062661

  13. Apr 2020
  14. Dec 2019
    1. Reconnaissance
      First, we try to gather as much information as possible, because our success depends on this information. Here we search about the target, find social information, and find the technology used. We do it manually and automatically.

      Vulnerability Analysis
      When we have enough information, we start analyzing vulnerabilities. For example, in this step we figure out what ports are open, what operating system the target is using, and the version of software used. Here we use some commercial and open-source tools, and work manually, to find an exploitable vulnerability.

      Exploitation
      Our goal is not just to find a vulnerability. We have to get access and do the thing you wanted us to do. Successful exploitation completely depends on the previous two phases and on how hard we worked there. For example, if we found an overflow-type vulnerability, then here we write an exploit. This step is harder because most of the work must be done manually.

      Post Exploitation and Covering Tracks
      The job is not done yet. After successfully hacking (exploiting) into the system, our future access depends on this phase. Here the hacker will install an advanced backdoor and clean all the logs very carefully.
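The "figure out what ports are open" part of the vulnerability-analysis phase can be sketched as a simple TCP connect check (a toy illustration, not a real scanner such as nmap; the host in the commented example is a placeholder, and you should only probe systems you are authorized to test):

```python
import socket

def tcp_port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Illustrative usage (192.0.2.10 is a documentation address, not a real target):
# open_ports = [p for p in (22, 80, 443) if tcp_port_open("192.0.2.10", p)]
```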
  15. Nov 2019
  16. Aug 2019
  17. elizabethreddy.files.wordpress.com elizabethreddy.files.wordpress.com
    1. Through comparing data with data, we learn what our research participants view as problematic and begin to treat it analytically.
    2. we try to understand participants' views and actions from their perspectives
    3. We want to know what is happening in the setting, in people's lives, and in lines of our recorded data. Hence, we try to understand our participants' standpoints and situations, as well as their actions within the setting.
    4. attempt to portray meanings and actions
    5. For sociologists, generic processes are basic to social life; for psychologists, generic processes are fundamental for psychological existence; for anthropologists, these processes support local cultures. Because they are fundamental, generic processes can apply in varied professions and fields. A grounded theorist can elaborate and refine the generic process by gathering more data from diverse arenas where this process is evident.

      this was done by including similar creators at opposite ends of the political spectrum, and seeing what exactly the generic processes were in their creations, if any.

    6. In a memo, raise them to conceptual categories for your developing analytic framework; give them conceptual definition and analytical treatment in narrative form in your memo.

      as i did in describing each gerund in a separate document, to ensure i had the same intention each time.

    7. Preconceptions work their way into how we think and write. Researchers who believe themselves to be objective social scientists often assume that their judgments of participants are correct. This stance can lead to treating one's unexamined assumptions as fact.

      a point to consider in my research of two very different creators, of whom i have very strong but polar opposite opinions.

    8. Some respon-dents or events will make explicit what was implicit in earlier statements or events.

      the goal of the second round of coding i performed.

    9. In vivo codes can provide a crucial check on whether you have grasped what is significant

      like Molyneux's labeling of Mandela as a sociopathic idiot

    10. we look for their implicit meanings and attend to how they construct and act upon these meanings. In doing so, we can ask, what analytic category(ies) does this code suggest?

      an example is how Molyneux describes the characters of his narrative, most interestingly in his framing of Muslims. he never alludes to these being an ethnographically and religiously diverse group with a rich history, which is either something he genuinely believes, or something he does not think the audience needs to know, perhaps somewhere between the two. the quotes he uses and the descriptions he gives them provide us with some insight, but for the coding process, it's safer to use the terms he does before drawing conclusions

    11. Thomas (1993) says that a researcher must take the familiar, routine, and mundane and make it unfamiliar and new.

      acknowledging and reflecting upon my experiences with the creators i've included enables me to ask myself if i'm assigning a certain meaning because i've come to expect it, or if it's something new i'm learning about them or their community.

    12. Concrete, behavioristic descriptions of people's mundane actions may not be amenable to line-by-line coding, particularly when you observed a scene but do not have a sense of its context, its participants, and did not interact with them.

      this is partially applicable to my study, since i had no direct interaction with the subjects, and have to infer some meaning based on my experiences with the field prior to collecting fieldwork. my method began with line-by-line coding, but the revision of the codes adopted tips for incident-to-incident coding, since i began comparing similarly themed segments based on their topic or tone, rather than taking each line individually. this was of use for identifying where the speaker chose to make a speculation versus a quote, a statistic versus an anecdote, and theorising what this told me about their relationship to the content.

    13. How does the research participant(s) act while involved in this process?

      this was a guiding question for me when describing the tone in each code, which added depth to the personal reaction they were experiencing but not necessarily addressing when focusing on different concepts. for example, when were they excited, when were they solemn, when were they sarcastic.

    14. Line-by-line coding works particularly well with detailed data about funda-mental empirical problems or processes whether these data consist of inter-views, observations, documents, or ethnographies and autobiographies.

      the appropriate method for studying the youtube content, since i can break up the speech into sentences or sentence fragments as dictated by the speaker. comments are usually short, and the longer ones are separated by punctuation

    15. If you ignore, gloss over, or leap beyond participants' meanings and actions, your grounded theory will likely reflect an outsider's, rather than an insider's view.

      this was a risk during my first readings, since some of the codes i applied were definitely my own interpretations rather an attempt to describe the speaker's perspective, for example, initial codes that assumed the speaker was 'viewing' a topic in a certain way, did not appreciate whether their view was a personal speculation being offered, an overt expression of an emotional response, an assertion of facts, etc. these clarifications were necessary to build my theory of toolsets that youtube creators use to convey meaning

    16. starting from the words and actions of your respondents, preserves the fluidity of their experience and gives you new ways of looking at it

      in my own process, the words i took from the speaker more alluded to their projected convictions on their subject of choice, rather than the action they were performing more literally. for example, i frequently described what Molyneux was implying, using his phrasing to describe what the implication was, though at first it may not have been obvious that it was an implication rather than, say, an assertion.

    17. Codes are also provisional in the sense that you may reword them to improve the fit. Part of the fit is the degree to which they capture and condense meanings and actions.

      i had two phases of initial coding, first having very open and multifaceted descriptions, second narrowing in on the precise methods the speaker was using to communicate

    18. Hence, simultaneous data collection and analysis can help you go further and deeper into the research problem as well as engage in developing categories.

      at the beginning of my research, i planned on just analysing youtube comments, but saw soon after that the codes i could offer were bare without the content which prompted them.

    19. Thus we define what we see as significant in the data and describe what we think is happening.

      there is no way around this, i cannot code for something i don't see or understand. the best i can do is be detailed in my descriptions and thorough in simultaneously watching and listening to the content, until i see from my notes what patterns exist. this can only happen through repeated and honest interactions with the data, acknowledging personal convictions and placing them aside to consider why these words are urgent to the speaker.

    20. Coding impels us to make our participants' language problematic to render an analysis of it. Coding should inspire us to examine hidden assumptions in our own use of language as well as that of our participants.

      the phrasing we use in codes brings to light the type of themes we may be subconsciously looking for. in this case, i found i was repeatedly trying to describe what the creators were saying without really saying it, evidenced through their tone and word choice, where they chose to make overt statements, and when meanings were implied.

    21. scrutinize your data and define meanings within it. Through this active coding, you interact with your data again and again and ask many different questions of them. As a result, coding may take you into unforeseen areas and new research questions.

      i was asking very broad questions at the start of my research to accommodate this - how do people communicate, connect, relate, express etc on youtube, is there a common language, or common habits that exist within or between videos and channels, and does the channel creator play a role in this? nearing the end, i find i'm answering the question of how a creator conveys meaning and how they prompt reactions, and how does the community respond to this.

    1. • A summary of a wide range of theories and models of online misogyny from the feminist literature, as well as an analysis of the works that have targeted the problem of online misogyny from a computational perspective.
      • The translation of different categories of misogyny, identified in feminist theory, into lexicons of hate terms to study the evolution of language within the manosphere.
      • An in-depth analysis of different manifestations and evolution of misogyny across the Reddit manosphere.
      • We corroborated existing feminist theories and models around the phenomenon of the manosphere by conducting a large-scale observational analysis.

      need to build a lexicon and refine knowledge of behaviour

    2. Full manual analysis is impractical and thus, automatic techniques need to be used.

      need for shortcuts in analysing discourse
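The automatic techniques the quote alludes to are often as simple as lexicon matching: counting occurrences of lexicon terms across posts. A minimal sketch (the lexicon entries here are neutral placeholders, not the study's actual hate-term lexicons):

```python
import re
from collections import Counter

# Placeholder lexicon; the study derives its lexicons from feminist theory.
lexicon = {"alpha", "beta"}

def lexicon_counts(posts, lexicon):
    """Count occurrences of lexicon terms across a list of posts."""
    counts = Counter()
    for post in posts:
        for token in re.findall(r"[a-z']+", post.lower()):
            if token in lexicon:
                counts[token] += 1
    return counts

posts = ["Alpha and beta appear here", "alpha again"]
counts = lexicon_counts(posts, lexicon)
print(counts["alpha"], counts["beta"])  # 2 1
```

Tracking such counts over time per subreddit is one way to study how a community's language evolves.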

  18. Jun 2019
    1. “I see Posthumanism as a methodology: a conceptual framework that can be applied to the field of graphic design.”

      An important undercurrent in all of our readings is methodology. What methods do posthumanist orientations require, invent, frustrate, occlude?

  19. May 2019
    1. I hope the readers will take my over-crowded article as an attempt to compose a missing community of posthuman scholars: the essay as assemblage

      Another line of thought I would like for us to pursue is method. Braidotti turns to cartography here as a method (always selective, impartial, mobile). What other methods are suggested, enacted, demonstrated through the readings to respond to what Braidotti calls the "posthuman predicament"?

    1. The reaction mixture contained 0.2 mL of enzyme sample, 0.3 mL of buffer and 0.5 mL of p-nitrophenyl-β-D-glucopyranoside (1.0 mM) prepared in 100 mM buffer as the substrate. The reaction was terminated after 30 min of incubation at 70 °C by adding 2 mL of sodium carbonate-bicarbonate buffer (0.1 M, pH 10.0). The liberation of p-nitrophenol was measured at 400 nm and its yield was determined using a standard curve of p-nitrophenol (1-10 μg mL-1) prepared in sodium carbonate-bicarbonate buffer
    2. β-Glucosidase
    3. The activities of β-xylosidase, xylan acetylesterase and arbinofuranosidase were measured using 1 mM p-nitrophenylxylopyranoside, p-nitrophenylacetate and p-nitrophenylarabinofuranoside, respectively, prepared in sodium citrate buffer (0.1 M, pH 7.0). One mL of reaction mixture containing 0.2 mL of crude enzyme solution, 0.3 mL of sodium citrate buffer (0.1 M, pH 7.0) and 0.5 mL of substrate was incubated at 80 °C for 30 min. The reaction was terminated by adding 2 mL sodium carbonate-bicarbonate buffer (1.0 M, pH 10.0). The activities were determined using a p-nitrophenol standard curve (1-10 μg mL-1) drawn using absorbance values measured in a spectrophotometer at 400 nm. One unit of the enzyme is defined as the amount of enzyme that liberates 1 μmole of p-nitrophenol mL-1 min-1 under assay conditions.
    4. Assays for β-Xylosidase, acetylesterase and arbinofuranosidase
    5. Xylanolytic activity was determined according to Archana and Satyanarayana (1997). The reaction mixture containing 0.5 mL of 1% birchwood xylan in glycine NaOH buffer (0.1 M, pH 9.0) and 0.5 mL of cell free sonicated supernatant was incubated at 80 °C in a water bath for 10 min. After incubation, 1 mL DNSA reagent (Miller, 1959) was added to the reaction mixture and the tubes were incubated in a boiling water bath for 10 min, followed by the addition of 400 μL of 33% w/v sodium potassium tartrate. The absorbance values were recorded at 540 nm in a spectrophotometer (Shimadzu, Japan). The liberated reducing sugars were determined by comparing the absorbance values of these with a standard curve drawn with different concentrations of xylose. One unit (IU) of xylanase is defined as the amount of enzyme required for liberating one μmol of reducing sugar as xylose mL-1 min-1under the assay conditions. Composition of Dinitrosalicylic acid (DNSA) reagent NaOH - 10.0 g Phenol - 2.0 g DNSA - 2.0 g Distilled Water - 1000 mL DNSA reagent was stored in an amber bottle at 4 °C till further use. Sodium sulphite (0.05 % v/v) was added just before the use of the reagent.
    6. Enzyme Assays
    7. A stock solution of xylose (1 mg mL-1) was prepared in distilled water. A dilution series ranging from 100-1000 μg mL-1 was prepared from the stock solution. To 1 mL of solution, 1mL of DNSA was added and kept in a boiling water bath for 10 min and then 400 μL of sodium potassium tartrate solution was added and kept it for cooling. The absorbance was recorded in a spectrophotometer (Shimadzu, UV-VIS) at 540 nm
    8. The clear cell-free supernatants were used as the source of crude recombinant xylanase.
    9. Preparation of standard curve of xylose
    10. Quantitative screening for determination of xylanase in shake flask
    11. Sonicated cells of E. coli having recombinant vector was centrifuged. Supernatant was dispensed into 0.2 % v/v xylan agar plate and incubated for 4 h. The plates were then flooded with Congo red solution (0.2 % w/v) for 30 min and destained with 1M NaCl solution till a clear zone of xylan hydrolysis was visible. The plates were gently shaken on a shaker to accelerate the process of staining/destaining
    12. Qualitative detection of xylanolytic activity by plate assay
    13. DETECTION OF XYLANASE ACTIVITY
    14. Overnight grown cultures of E. coli DH5α, E. coli BL21 (DE3), E. coli XL1blue cells with and without constructs were preserved in 80 % v/v glycerol
    15. MAINTENANCE OF THE RECOMBINANT STRAIN
    16. Metagenomic library obtained from various extracted DNA was screened by replica plating method on 0.3 % w/v RBB xylan containing LB-amp plates. The cells were allowed to grow for overnight at 37 °C and thereafter incubated at 4 °C till the appearance of zone of hydrolysis. A total of 36,400 clones from various environmental samples were screened.
    17. SCREENING OF THE TRANSFORMANTS FOR XYLANASE ACTIVITY
    18. Transformation of calcium-competent cells was carried out by the procedure detailed below: •The competent bacterial cells were thawed briefly and 200 μL of cells was mixed rapidly with plasmid DNA (10-50 ng) in fresh, sterile microcentrifuge tubes and maintained on ice for 30 min. A negative control with competent cells only (no added DNA) was also included. •Cell membranes were disrupted by subjecting cells to heat-pulse (42 °C) for 90 sec. •After heat shock, cells were incubated on ice for 5 min. •Cells were then mixed with 1 mL LB medium and incubated with shaking at 37 °C for 1 h. •For blue/white screening 40 μL of X-gal solution (20 mg mL-1 in dimethylformamide) and 4 μL of the IPTG (200 mg mL-1) was spread on LB-ampicillin (LB-amp) plates with a sterile glass rod. The plate was allowed to dry for 1h at 37 °C prior to spreading of bacterial cells. •Bacterial cells (100-200 μL) were spread and the plate was incubated at 37 °C for overnight. •White colonies were picked from the plates and suspended into LB-amp broth and cultivated to OD600=0.5
    19. Transformation procedure
    20. 2 mL of an overnight culture of E. coli cells was inoculated into 100 mL LB medium and incubated with vigorous shaking at 30 °C until A600 of 0.8 was reached. •Cells were collected in 50 mL plastic (Falcon) tubes, cooled for 15 min on ice and centrifuged in a pre-cooled centrifuge (4,000 rpm for 10 min at 4 °C). •The pellet was suspended in 20 mL of ice-cold 50 mM CaCl2-15% glycerol solution, maintained on ice for 15 min and centrifuged again at 4,000 rpm for 10 min at 4 °C. •Pellet was resuspended in 2 mL of ice-cold 50 mM CaCl2-15 % glycerol solution, kept on ice for 30 min and aliquoted in 400 μL in microcentrifuge tubes. These were stored at -80 °C until required.
    21. Preparation of calcium-competent cells
    22. Preparation of electrocompetent cells (E. coli cells) A protocol was employed. The procedure was carried out in cold under sterile conditions as follows: •A single colony of E. coli DH10B/ DH5α/XL1blue was inoculated in 20 mL of LB medium and grown overnight at 30 °C. •500 mL LB medium was inoculated with 5mL of this overnight grown culture of the E. coli and incubated with vigorous shaking (250 rpm) at 30 °C until an A600of 0.5 - 0.8 was achieved. •The cells were chilled in ice for 10-15 min and transferred to prechilled Sorvall® centrifuge tubes and sedimented at 4,000 rpm for 20 min at 4 °C. •The supernatant was decanted and cells were resuspended in 500 mL of sterile ice-cold water, mixed well and centrifuged as described above. •The washing of the cells described above was repeated with 250 mL of sterile ice-cold water, following which cells were washed with 40 mL of ice-cold 10 % (v/v) glycerol and centrifuged at 4,000 rpm for 10 min. •The glycerol solution was decanted and the cell volume was recorded. The cells were resuspended in an equal volume of ice-cold 10 % glycerol. •Cells were then dispensed in 40 μL volumes and stored at -80 °C until required.
    23. Electrotransformation
    24. BACTERIAL TRANSFORMATION
    25. Purified DNA fragments of size 2-8 kb were ligated to the treated vector using a 1:3 vector:insert ratio in a volume of 10 μL. The total amount of DNA was about 0.5 μg. Vector and insert DNA was heated to 45 °C for 10 min and then immediately chilled on ice for 5 min prior to addition of ligase and buffer. T4 DNA ligase (NEB, England) was added to a final concentration of 0.125 U μL-1 and reactions were incubated at 16 °C overnight in a ligation chamber. Reaction mixture incubated under the same conditions without addition of the enzyme was used as control. A ligation reaction was also set up under condition with linear plasmid DNA containing the
    26. Ligation of insert DNA with dephosphorylated vector
    27. In order to minimize self-ligation of vector during cloning experiments, the digested DNA was subsequently treated with calf intestinal phosphatase (CIP) [NEB, UK]. The reaction conditions and amount of CIP were optimized and varied from 0.06-1 unit/picomole DNA termini. The dephosphorylation reaction was carried out in a 50 μL reaction as follows. Reaction mixture containing no restriction enzyme was treated as control. Reaction was incubated for 1 h at 37 °C and stopped by heat inactivation at 65 °C for 20 min. 2.5.5. Composition of restriction mixture (50 μL) Linearized Plasmid DNA X μL (1 μg) CIP 1 μL (0.06-1 U μL-1) Reaction buffer (10X) 5.0 μL Distilled water Y μL Total volume 50 μL Linearized and dephosphorylated plasmids from each reaction were purified from low melting agarose gel using gel extraction method according to the manufacturer's protocol (Qiagen gel extraction kit, Germany). 100 ng DNA from each reaction was then ligated in 15 μL reaction volume containing 1.5 μL of 10X ligation buffer (NEB, England) and 0.2 μL of T4 DNA ligase to check the efficiency of self-ligation after dephosphorylation. The ligation mixture was incubated at 16 °C overnight and transformed into E. coli DH5α competent cells.
    28. Dephosphorylation of the restricted plasmid
    29. The vector isolated as above was digested with BamHI to generate cohesive ends. The reaction was performed in 1.5 mL Eppendorf tubes as described below.
    Composition of restriction mixture (100 μL)
    Plasmid DNA: X μL (20 μg)
    BamHI: 8 μL (10 U μL-1)
    NEB buffer 4: 10.0 μL
    BSA (100X): 1 μL
    MQ water: Y μL
    The reaction mixture was incubated at 37 °C for 3 h. The digestion was stopped by heat inactivation at 65 °C for 20 min. Linearization of the plasmid was checked by 1.2 % (w/v) agarose gel electrophoresis. The digested plasmid was purified from low-melting agarose gel according to the manufacturer's protocol (Qiagen gel extraction kit, Germany).
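The X and Y volumes in a mixture like the one above follow from the DNA stock concentration and the fixed components. A small helper sketch (function name and the example stock concentration are assumptions for illustration only):

```python
def mix_volumes(total_uL, dna_ug, dna_stock_ug_per_uL, fixed_uL):
    """Return (DNA volume X, water volume Y) for a restriction mixture."""
    x = dna_ug / dna_stock_ug_per_uL   # volume carrying the required DNA mass
    y = total_uL - x - fixed_uL        # water tops the reaction up to total
    if y < 0:
        raise ValueError("fixed components and DNA exceed the total volume")
    return x, y

# 100 uL reaction, 20 ug DNA at an assumed 0.5 ug/uL stock, with
# 8 + 10 + 1 = 19 uL of enzyme, buffer and BSA already fixed:
print(mix_volumes(100, 20, 0.5, 19))  # (40.0, 41.0)
```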
    30. Restriction digestion of plasmid DNA
    31. Two hundred μL of alkaline-SDS solution was added to the above suspension, mixed by inverting the tubes 3 times and incubated for 5 min at room temperature.
    - To the above mixture, 250 μL of 3 M Na-acetate (pH 4.8) was added, mixed by inverting the tubes 3 times, and centrifuged at 12,000 x g for 10 min.
    - The supernatant was collected in another microcentrifuge tube (MCT), 200 μL of phenol:chloroform solution was added, inverted two times and centrifuged at 12,000 x g for 8 min at room temperature.
    - The aqueous phase was transferred to new tubes and 500 μL of chilled (-20 °C) ethanol (96 %) was added.
    - The tubes were centrifuged at 13,000 x g for 25 min at 4 °C, the supernatant discarded and the pellet dried for 15 min at room temperature.
    - The pellet was washed with 500 μL of chilled 70 % (v/v) ethanol and centrifuged at 13,000 rpm for 4 min at 4 °C.
    - The pellet was dried at room temperature, dissolved in 50 μL of 1X TE buffer (pH 8.0) containing RNase and stored at -20 °C till further use.
    32. The cells of E. coli DH10B harbouring the p18GFP vector were cultivated overnight at 37 °C in LB medium containing ampicillin (100 μg mL-1).
    - The E. coli culture carrying the p18GFP vector (~1.5 mL) was taken in Eppendorf tubes and centrifuged at 10,000 x g for 5 min.
    - The pellet was homogenized by vortex mixing in 100 μL of homogenizing solution
    33. Plasmid isolation by the miniprep method
    34. The metagenomic DNA extracted by the protocol defined above was digested with Sau3AI under conditions optimized to generate maximum fragments in the size range of 2-6 kb. Different concentrations (0.05 to 1 unit) of enzyme were used to optimize the digestion of 1 μg of DNA. Reactions were carried out in a final volume of 30 μL each in 1.5 mL Eppendorf tubes. Reaction mixtures (1 μg DNA with 3 μL NEB buffer 3 and 0.3 μL of 10X BSA) were kept at 37 °C for 10 min and stopped by heat inactivation at 80 °C for 20 min. The digestion reactions were checked for the desired fragments by 0.8 % (w/v) agarose gel electrophoresis. After optimization for the appropriate fragment size, a large-scale digestion was carried out and the fragments (2-8 kb) were purified from low-melting agarose gel according to the manufacturer's protocol (Qiagen gel extraction kit, Germany).
    35. Insert DNA preparation
    36. CONSTRUCTION OF METAGENOMIC LIBRARY
    37. An attempt was made to study the effect of storage of DNA extracts on DNA yield and purity. The DNA extracts were centrifuged and the supernatants were dispensed into 2.0 mL Eppendorf tubes and stored at -20 °C for a month. DNA precipitation and quantification were carried out at weekly intervals.
    38. Effect of storage on soil/sediment DNA extracts
    39. Attempts were made to amplify the signature sequences of bacterial, archaeal and fungal specific regions using the respective primer sets shown in Table 2.2. The reactions were carried out in 50 μL reaction mixtures in a Thermal Cycler (Bio-Rad, USA). The PCR conditions were optimized as follows: bacterial 16S rDNA, initial denaturation of 3 min at 94 °C followed by 30 cycles of 30 sec at 93 °C, 60 sec at 55 °C and 90 sec at 72 °C; archaeal 16S rDNA, 5 min at 95 °C, 35 cycles of 50 sec at 94 °C, 60 sec at 62 °C and 60 sec at 72 °C; fungal specific ITS regions, 3 min at 95 °C, 30 cycles of 60 sec at 94 °C, 45 sec at 56 °C and 50 sec at 72 °C. Final extension was 7 min at 72 °C in all PCR runs. Amplifications were visualized on 1.2 % (w/v) agarose gels.
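The three cycling programs are easier to compare as structured data. The sketch below simply transcribes the conditions described above (dictionary keys are illustrative); temperatures are in °C and times in seconds:

```python
# Transcription of the optimized cycling conditions from the text.
# Each step is (temperature_C, time_s); final extension is common to all runs.
PCR_PROGRAMS = {
    "bacterial_16S_rDNA": {"initial": (94, 180), "cycles": 30,
                           "denature": (93, 30), "anneal": (55, 60), "extend": (72, 90)},
    "archaeal_16S_rDNA":  {"initial": (95, 300), "cycles": 35,
                           "denature": (94, 50), "anneal": (62, 60), "extend": (72, 60)},
    "fungal_ITS":         {"initial": (95, 180), "cycles": 30,
                           "denature": (94, 60), "anneal": (56, 45), "extend": (72, 50)},
}
FINAL_EXTENSION = (72, 420)  # 7 min at 72 C in all PCR runs
```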
    40. PCR amplification of microbial population
    41. Purity of the DNA extracted from various environmental samples was confirmed by subjecting the extracted DNA to restriction digestion. DNA was digested with Sau3AI (New England Biolabs). One μg of metagenomic DNA in a 20 μL reaction mixture was treated with 0.5 U of Sau3AI and incubated at 37 °C for 10 min. The reaction was terminated at 80 °C for 20 min and the digested DNA was fractionated on a 1.2 % (w/v) agarose gel.
    42. Restriction digestion
    43. VALIDATION OF METAGENOME OBTAINED BY THE PROTOCOL DEVELOPED IN THIS INVESTIGATION
    44. The isolated DNA was diluted (1:100) with MQ water. The concentration of DNA [N] (mg mL-1) was determined spectrophotometrically by recording absorbance at 260 nm (A260) as:
    A260 = ε260 × [N]
    where ε260 is the extinction coefficient of DNA (50 for dsDNA) and [N] is the concentration (mg mL-1) of DNA. The concentration of dsDNA was calculated as:
    [DNA] (mg mL-1) = A260/ε260
    [DNA] (μg mL-1) = A260 × 50 × dilution factor
    Purity of DNA was checked by measuring absorbance at 260 and 280 nm and calculating the A260/A280 ratio (Sambrook et al., 1989). A DNA sample was considered pure when A260/A280 ranged between 1.8 and 1.9. An A260/A280 < 1.7 indicated contamination of the DNA preparation with protein or aromatic substances such as phenol, while an A260/A230 < 2.0 indicated possible contamination with high molecular weight polyphenolic compounds such as humic substances.
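The working formulas above reduce to a few lines of arithmetic. A hedged Python sketch (thresholds taken from the text; function names are illustrative):

```python
def dna_conc_ug_per_mL(a260, dilution_factor=100):
    """[DNA] in ug/mL from A260, using A260 x 50 x dilution factor for dsDNA."""
    return a260 * 50 * dilution_factor

def purity_flags(a260, a280, a230):
    """Interpret the absorbance ratios as described in the text."""
    r280 = a260 / a280
    r230 = a260 / a230
    return {
        "pure": 1.8 <= r280 <= 1.9,            # acceptable A260/A280 window
        "protein_or_phenol": r280 < 1.7,       # protein/aromatic contamination
        "humic_contamination": r230 < 2.0,     # humic substances likely present
    }

print(dna_conc_ug_per_mL(0.05))  # 250.0 ug/mL for the 1:100 dilution
```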
    45. Determination of DNA quantity and purity
    47. The soil DNA from Pantnagar and Lonar soil samples was also extracted by various manual methods (Desai and Madamwar, 2007; Agarwal et al., 2001; Yamamoto et al., 1998) as well as commercial methods (MN kit, Germany; Mo-Bio kit, CA, USA; Zymo soil DNA kit, CA, USA) according to the manufacturers' protocols, and the methods were compared in terms of DNA yield and purity.
    48. Alternatively, metagenomic DNA was extracted from the alkaline soil samples using different commercial kits (UltraClean™ and PowerSoil™ [Mo Bio Laboratories Inc., Carlsbad, CA, USA], NucleoSpin kit [Macherey-Nagel, Germany] and Zymo soil DNA isolation kit [CA, USA]). The DNA was finally suspended in 100 μL of sterile Milli-Q water for further analysis.
    49. Commercial kits
    50. Comparison of yield and purity of crude DNA
    51. Soil (1 g) was suspended with 0.4 g (w/w) polyactivated charcoal (Datta and Madamwar, 2006) and 20 μL proteinase K (10 mg mL-1) in 2 mL of modified extraction buffer [cetyltrimethylammonium bromide (CTAB) 1 % w/v, polyvinylpolypyrrolidone (PVPP) 2 % w/v, 1.5 M NaCl, 100 mM EDTA, 0.1 M TE buffer (pH 8.0), 0.1 M sodium phosphate buffer (pH 8.0) and 100 μL RNase A] (Zhou et al., 1996) in 20 mL centrifuge tubes to homogenize the sample, and incubated at 37 °C for 15 min in an incubator shaker at 200 rpm. Subsequently, 200 μL of 10 % SDS was added to the homogenate and kept at 60 °C for 2 h with intermittent shaking. DNA was precipitated by adding 0.5 volume of PEG 8000 (30 % in 1.6 M NaCl) and left at room temperature for an hour (Yeates et al., 1998). The precipitated DNA was collected by centrifugation at 8000 x g at 4 °C. The supernatant was discarded, the pellet was dissolved in 1 mL of TE buffer (pH 8.0), and then 100 μL of 5 M potassium acetate (pH 4.5) was added and the mixture incubated at 4 °C for 15 min. The supernatant was collected after centrifugation at 8000 x g and treated with equal volumes of phenol:chloroform (1:1) followed by chloroform:isoamyl alcohol (24:1) at 8000 x g for 15 min
    52. PROTOCOL FOR OPTIMIZATION OF HUMIC ACID-FREE DNA FROM ALKALINE SOILS
    53. Various strains of Escherichia coli (DH5α, XL1-Blue, DH10B) were used as hosts for the propagation of recombinant vectors. In addition, Bacillus subtilis was used as a host for the expression of the xylanase gene from the recombinant vector pWHMxyl. Different vectors used in this investigation are listed in
    54. BACTERIAL STRAINS
    55. Soil, sediment, effluent, and water samples were collected from various hot and alkaline regions of India and Japan in sterile polyethylene bags/bottles. The samples were transported to the laboratory and preserved at 4 °C. The temperature and pH of the samples were recorded.
    56. COLLECTION OF SAMPLES
    1. The CLD-J domain shares ~51 % similarity with the CDPK from Arabidopsis thaliana, AtCPK-1. The homology model of CLD-JD was determined using Swiss-Model from EMBL. The template used was the CLD-JD of AtCPK-1, which was crystallized as a dimer; the J-domain helices from the two monomers are swapped with each other in this structure (Chandran et al., 2006). Therefore, the initial homology model generated for the complementary CLD-J domain of PfCDPK4 was also a dimer. To understand the interaction of this helix (Gln358-Lys371) with the CLP of the monomer, the helix was rotated and translated, keeping residues 372-375 as the flexible linker region, and superimposed onto the helix from the other monomer, which resulted in the initial model for the CLD-J domain monomer. Initially, these flexible linker residues (372-375) were locally minimized using COOT (Emsley and Cowtan, 2004), and the overall structure was refined with slow-cooling annealing in CNS (Brunger et al., 1998) to remove all short contacts. Finally, the model quality was checked with the PROCHECK software (Laskowski et al., 1996). The homology model was generated with the help of Dr. S. Gaurinath, JNU.
    2. Homology Modeling
    3. DAPI (Vector Labs, USA), and stained parasites were visualized using a Zeiss Axioimager fluorescence microscope and the images were processed using AxioVision software
    4. Thin blood smears of parasite cultures were fixed with chilled methanol for 2 min. After air drying and washing with PBS, permeabilization was done with 0.05 % saponin in 3 % BSA/PBS for 15 min, followed by blocking with 3 % BSA made in PBS for 1 h. Subsequent incubations with primary antibodies were performed for 2 h at room temperature or at 4 °C overnight. The smears were washed 3 × 5 min with PBS. The slides were then incubated with appropriate secondary antibodies (labeled either with fluorescein isothiocyanate (FITC) or Texas Red) for 1 h at room temperature. The slides were washed again with PBS and air dried in the dark. Smears were mounted in glycerol-based mounting media that contained
    5. Immunofluorescence Assay
    6. Gametocyte-rich parasite lysate was prepared using lysis buffer containing phosphatase inhibitors (20 μM sodium fluoride, 20 μM β-glycerophosphate, and 100 μM sodium vanadate). For some experiments, 2 mM calcium or 2 mM EGTA was added to the lysis buffer. 100 μg of lysate protein was incubated with PfCDPK4 antisera (1:100 ratio) for 12 h at 4 °C on an end-to-end shaker. Subsequently, 50 μL of protein A+G-Sepharose (Amersham Biosciences) was added to the antibody-protein complex and incubated on an end-to-end shaker for 2 h. The beads were washed with phosphate-buffered saline three times at 4 °C and resuspended in kinase assay buffer containing phosphatase inhibitors.