519 Matching Annotations
  1. Apr 2021
    1. Only the Starter Kit is available in this reboot. The Starter Kit is FREE, in order to distribute it as widely as possible. The goal of this Kickstarter campaign is to introduce Clash of Deck to the whole world and to bring a community together around the game. If the Kickstarter campaign succeeds, we will then have the necessary momentum to publish additional paid content on a regular basis, to enrich the game with stand-alone expansions, additional modules, alternative game modes...
  2. Mar 2021
    1. Nearly everyone is aware of Uber and Uber’s services. Hence, considering those services, any start-up entrepreneur, before thinking of starting a similar business, may wonder about the business model of Uber and the revenue model of Uber.
    1. Larremore, D. B., Wilder, B., Lester, E., Shehata, S., Burke, J. M., Hay, J. A., Tambe, M., Mina, M. J., & Parker, R. (2020). Test sensitivity is secondary to frequency and turnaround time for COVID-19 surveillance. MedRxiv, 2020.06.22.20136309. https://doi.org/10.1101/2020.06.22.20136309

    1. Baker, C. M., Campbell, P. T., Chades, I., Dean, A. J., Hester, S. M., Holden, M. H., McCaw, J. M., McVernon, J., Moss, R., Shearer, F. M., & Possingham, H. P. (2020). From climate change to pandemics: Decision science can help scientists have impact. ArXiv:2007.13261 [Physics]. http://arxiv.org/abs/2007.13261

    1. Gupta, R. K., Marks, M., Samuels, T. H. A., Luintel, A., Rampling, T., Chowdhury, H., Quartagno, M., Nair, A., Lipman, M., Abubakar, I., Smeden, M. van, Wong, W. K., Williams, B., & Noursadeghi, M. (2020). Systematic evaluation and external validation of 22 prognostic models among hospitalised adults with COVID-19: An observational cohort study. MedRxiv, 2020.07.24.20149815. https://doi.org/10.1101/2020.07.24.20149815

  3. Feb 2021
    1. Intuitively, you understand the flow just by looking at the BPMN diagram. And, heck, we haven’t even discussed BPMN or any terminology yet!
    1. Around 2 years ago I decided to end the experiment of “TRB PRO” as I felt I didn’t provide enough value to paying users. In the end, we had around 150 companies and individuals signed up, which was epic and a great funding source for more development.
    2. We’re now relaunching PRO, but instead of a paid chat and (never-existing) paid documentation, your team gets access to paid gems, our visual editor for workflows, and a commercial license.
    1. We use a subset of BPMN for the visual language in the editor, but added our own set of restrictions and semantics to it.
    1. Business Process Model and Notation (BPMN) is a standard for business process modeling that provides a graphical notation for specifying business processes in a Business Process Diagram (BPD),[3] based on a flowcharting technique very similar to activity diagrams from Unified Modeling Language (UML).
  4. Jan 2021
  5. Dec 2020
    1. INTERACTION OF GROUND WATER AND STREAMS

      A simple model figure. It would be good if we could draw the same interaction for our own area ourselves (even if only by hand).

    1. Eyal describes the theory called The Fogg Behavior Model which states that for a behavior (B) to occur, three things must be present at the same time: motivation (M), ability (A), and a trigger (T). More succinctly, B = MAT.

      Fogg Behavior Model says that for a Behavior (B) to occur 3 things have to be present at the same time:

      1. Motivation (M)
      2. Ability (A)
      3. Trigger (T)

      B = MAT

    1. Better community building: At the moment, MDN content edits are published instantly, and then reverted if they are not suitable. This is really bad for community relations. With a PR model, we can review edits and provide feedback, actually having conversations with contributors, building relationships with them, and helping them learn.
    2. Better contribution workflow: We will be using GitHub’s contribution tools and features, essentially moving MDN from a Wiki model to a pull request (PR) model. This is so much better for contribution, allowing for intelligent linting, mass edits, and inclusion of MDN docs in whatever workflows you want to add it to (you can edit MDN source files directly in your favorite code editor).
  6. Nov 2020
    1. Linear mixed models are an extension of simple linear models to allow both fixed and random effects, and are particularly used when there is non-independence in the data, such as arises from a hierarchical structure.
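
      A minimal sketch of fitting such a model in Python with statsmodels - the data here is a hypothetical toy example with repeated measurements per group (the source of the non-independence):

      import pandas as pd
      import statsmodels.formula.api as smf

      # hypothetical toy data: repeated measurements within groups
      df = pd.DataFrame({
          "y":     [1.0, 1.2, 2.1, 2.3, 3.0, 3.4],
          "x":     [0.0, 1.0, 0.0, 1.0, 0.0, 1.0],
          "group": ["a", "a", "b", "b", "c", "c"],
      })

      # fixed effect for x, random intercept per group
      result = smf.mixedlm("y ~ x", data=df, groups=df["group"]).fit()
      print(result.summary())
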
    1. We love dbt because of the values it embodies. Individual transformations are SQL SELECT statements, without side effects. Transformations are explicitly connected into a graph. And support for testing is first-class. dbt is hugely enabling for an important class of users, adapting software engineering principles to a slightly different domain with great ergonomics. For users who already speak SQL, dbt’s tooling is unparalleled.

      when using [[dbt]] the [[transformations]] are [[SQL statements]] - already something that our team knows

    1. We then estimate the relative weight each touch played in leading to a conversion. This estimation is done by allocating “points” to touches: each conversion is worth exactly one point, and that point is divvied up between the customer’s touches. There are four main ways to divvy up this point:

      • First touch: Attribute the entire conversion to the first touch
      • Last touch: Attribute the entire conversion to the last touch
      • Forty-twenty-forty: Attribute 40% (0.4 points) of the attribution to the first touch, 40% to the last touch, and divide the remaining 20% between all touches in between
      • Linear: Divide the point equally among all touches

      [[positional attribution]] works by identifying the touch points in the lifecycle, and dividing up the points across those touches.

      There are four main ways to divvy up this point - a small allocation sketch follows the list below.

      [[question]] What are the four main ways to divvy up the point in [[positional attribution]]?

      • [[first touch]]
      • [[last touch]]
      • [[forty-twenty-forty]]
      • [[linear]]
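
      A minimal sketch of the point allocation in Python - allocate_points is a hypothetical helper, and the 50/50 split when there are only two touches is an assumption (the source doesn’t cover that case):

      def allocate_points(touches, scheme="forty-twenty-forty"):
          """Split one conversion point across an ordered list of touches.

          Returns (touch, points) pairs, one per touch, summing to 1.0.
          """
          n = len(touches)
          if scheme == "first touch":
              return [(t, 1.0 if i == 0 else 0.0) for i, t in enumerate(touches)]
          if scheme == "last touch":
              return [(t, 1.0 if i == n - 1 else 0.0) for i, t in enumerate(touches)]
          if scheme == "linear":
              return [(t, 1.0 / n) for t in touches]
          # forty-twenty-forty: 40% to first, 40% to last, 20% split evenly in between
          if n == 1:
              return [(touches[0], 1.0)]
          if n == 2:
              return [(touches[0], 0.5), (touches[1], 0.5)]  # assumption: no middle touches
          middle = 0.2 / (n - 2)
          return [(t, 0.4 if i in (0, n - 1) else middle) for i, t in enumerate(touches)]

      print(allocate_points(["organic", "email", "email", "paid_search"]))
      # [('organic', 0.4), ('email', 0.1), ('email', 0.1), ('paid_search', 0.4)]
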
    2. Once you have pageviews in your warehouse, you’ll need to do two things:

      • Sessionization: Aggregate these pageviews into sessions (or “sessionization”), writing logic to identify gaps of 30 minutes or more.
      • User stitching: If a user first visits your site without any identifying information (typically a `customer_id` or `email`), and then converts at a later date, their previous (anonymous) sessions should be updated to include their information. Your web tracking system should have a way to link these sessions together.

      This modeling is pretty complex, especially for companies with thousands of pageviews a day (thank goodness for incremental models 🙌). Fortunately, some very smart coworkers have written packages to do the heavy lifting for you, whether your page views are tracked with Snowplow, Segment or Heap. Leverage their work by installing the right package to transform the data for you.

      [[1. Gather your required data sources]] - once we have data, we need to do two things: [[sessionization]] - the aggregation of pageviews etc. into a session

      and [[user stitching]] - when a user arrives without any identifying information and then converts - kind of like anonymous users / signups - and we try to tie them back to a source
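
      A minimal sketch of the 30-minute-gap logic in Python - in dbt this would be a SQL model, but the sessionization rule is the same; sessionize is a hypothetical helper:

      from datetime import datetime, timedelta

      SESSION_GAP = timedelta(minutes=30)

      def sessionize(pageviews):
          """Assign a session index to (user_id, timestamp) pageviews.

          A new session starts whenever a user's gap since their
          previous pageview is 30 minutes or more.
          """
          last_seen, session_ids, out = {}, {}, []
          for user, ts in sorted(pageviews):
              if user not in last_seen or ts - last_seen[user] >= SESSION_GAP:
                  session_ids[user] = session_ids.get(user, 0) + 1
              last_seen[user] = ts
              out.append((user, ts, session_ids[user]))
          return out

      views = [
          ("u1", datetime(2020, 11, 1, 9, 0)),
          ("u1", datetime(2020, 11, 1, 9, 10)),  # same session (10-minute gap)
          ("u1", datetime(2020, 11, 1, 10, 0)),  # new session (50-minute gap)
      ]
      for row in sessionize(views):
          print(row)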

    3. 1. Gather your required data sources

      Sessions (required dbt techniques: packages)

      We want to use a table that represents every time a customer interacts with our brand. For ecommerce companies, the closest thing we can get to for this is sessions. (If you’re instead working for a B2B organization, you should consider using a table of interactions between your sales team and a potential customer from your CRM.)

      Sessions are discrete periods of activity by a user on a website. The industry standard is to define a session as a series of activities followed by a 30-minute window without any activity.

      [[1. Gather your required data sources]]

    4. How to build an attribution model

      [[How to build an attribution model]]

      • [[1. Gather your required data sources]]
      • [[2. Find all sessions before conversion]]
      • [[3. Calculate the total sessions and the session index]]
      • [[3. Allocate points]]
      • [[4. Bonus Join in revenue value]]
      • [[5. Bonus Join with ad spend data]]
      • [[6. Ship it!]]
    5. The attribution data model

      In reality, it’s impossible to know exactly why someone converted to being a customer. The best thing that we can do as analysts is provide a pretty good guess. In order to do that, we’re going to use an approach called positional attribution. This means, essentially, that we’re going to weight the importance of various touches (customer interactions with a brand) based on their position (the order they occur in within the customer’s lifetime). To do this, we’re going to build a table that represents every “touch” that someone had before becoming a customer, and the channel that led to that touch.

      One of the goals of an [[attribution data model]] is to understand why someone [[converted]] to being a customer. This is impossible to do accurately, but this is where analysis comes in.

      There are some [[approaches to attribution]], one of those is [[positional attribution]]

      [[positional attribution]] means weighting the importance of touch points - or customer interactions - based on their position within the customer lifetime.

    6. The most transparent attribution model. You’re not relying on vendor logic. If your sales team feels like your attribution is off, show them dbt docs, walk them through the logic of your model, and make modifications with a single line of SQL.

      [[transparent attribution model]]

    7. The most flexible attribution model. You own the business logic and you can extend it however you want, and change it easily when your business changes.

      [[flexible attribution model]]

    8. That’s it. Really! By writing SQL on top of raw data you get: The cheapest attribution model. This playbook assumes you’re operating within a modern data stack, so you already have the infrastructure that you need in place:

      • You’re collecting events data with a tool like Snowplow or Segment (though Segment might get a little pricey)
      • You’re extracting data from ad platforms using Stitch or Fivetran
      • You’re loading data into a modern, cloud data warehouse like Snowflake, BigQuery, or Redshift
      • And you’re using dbt so your analysts can model data in SQL

      [[cheapest attribution model]]

    9. So what do you actually need to build an attribution model?

      • Raw data in your warehouse that represents customer interactions with your brand. For ecommerce companies, this is website visits. For B2B customers, it might be conversations with sales teams.
      • SQL

      to build an [[attribution model]] we need the raw data - this raw data should capture the [[customer interactions]], and in our case - also partner interactions, or people working with the partner?

    1. This is addressing a security issue; and the associated threat model is "as an attacker, I know that you are going to do FROM ubuntu and then RUN apt-get update in your build, so I'm going to trick you into pulling an image that _pretends_ to be the result of ubuntu + apt-get update so that next time you build, you will end up using my fake image as a cache, instead of the legit one." With that in mind, we can start thinking about an alternate solution that doesn't compromise security.
  7. Oct 2020
    1. In order to inform the development and implementation of effective online learning environments, this study was designed to explore both instructors' and students' online learning experiences while enrolled in various online courses. The study investigated what appeared to both support and hinder participants' online teaching and learning experiences.

      The authors discuss the issue of community and engagement in online graduate programs. They carried out a small case study and used a Cognitive Apprenticeship Model to examine a successful program in Higher Education. They found that students feel too many online classes are just reading and writing, regurgitating rather than applying, and lack sufficient connection with the instructor and with other students. They recommend some strategies to fix that, but admit that more work is needed. 9/10

    1. The educator’s role in self-directed learning

      Fostering self-directed learning through strategy is discussed by Bailey et al. (2019) in chapter 1 of “Self-Directed Learning for the 21st Century: Implications for Higher Education.” The authors review the changing role of the educator and the learner based on respective self-directed teaching strategies (problem-based learning, cooperative learning, process-oriented learning) and the learner’s propensity for self-directed learning. In addition to providing principles to promote self-directed learning, the Grow and Borich models for implementing said learning were briefly reviewed. 8/10

    1. Cognitive Presence “is the extent to which learners are able to construct and confirm meaning through sustained reflection and discourse” (Community of Inquiry, n.d., para. 5). Video is often used as a unidirectional medium with information flowing from the expert or instructor to the learner. To move from transmission of content to construction of knowledge, tools such as Voice Thread (VoiceThread, 2016) support asynchronous conversation in a multimedia format.

      The author, Kendra Grant, is the Director of Professional Development and Learning for Quillsoft in Toronto, Canada. Grant helps businesses succeed in education design and support. In this article Grant discusses how quickly the learning environment has changed through technological development. Grant explores the RAT Model, which guides instructors in the "use of technology to help transform instructional practice." Grant then examines the Community of Inquiry model, which seeks to create meaningful instruction through social, cognitive and teaching presence. Grant concludes by providing general principles for creating a positive video presence.

      Rating: 8/10

    1. virtual-dom exposes a set of objects designed for representing DOM nodes. A "Document Object Model Model" might seem like a strange term, but it is exactly that. It's a native JavaScript tree structure that represents a native DOM node tree.
  8. Sep 2020
    1. BPMN Viewer and Editor

      Use bpmn-js to display BPMN 2.0 diagrams on your website. Embed it as a BPMN 2.0 web modeler into your applications and customize it to suit your needs.
    1. mongoose.model

      mongoose.model()

      When you call mongoose.model() on a schema, Mongoose compiles a model for you. The first argument is the singular name of the collection your model is for. Mongoose automatically looks for the plural, lowercased version of your model name. https://mongoosejs.com/docs/models.html#compiling

  9. Aug 2020
    1. Candido, D. S., Claro, I. M., Jesus, J. G. de, Souza, W. M., Moreira, F. R. R., Dellicour, S., Mellan, T. A., Plessis, L. du, Pereira, R. H. M., Sales, F. C. S., Manuli, E. R., Thézé, J., Almeida, L., Menezes, M. T., Voloch, C. M., Fumagalli, M. J., Coletti, T. M., Silva, C. A. M. da, Ramundo, M. S., … Faria, N. R. (2020). Evolution and epidemic spread of SARS-CoV-2 in Brazil. Science. https://doi.org/10.1126/science.abd2161

    1. The RAT model sees software development as an off-line program-construction activity composed of these parts: defining, decomposing, estimating, implementing, assembling, and finishing

      This is what can lead to the 'there is only version 1.0' problem - and improvements / iterations fall by the wayside.

      This can have a number of consequences

      • over designed / engineered
      • doing unnecessary work
      • lack of user feedback and ability to accommodate it
      • rigid / fragile architecture
    1. Kreye, J., Reincke, S. M., Kornau, H.-C., Sánchez-Sendin, E., Corman, V. M., Liu, H., Yuan, M., Wu, N. C., Zhu, X., Lee, C.-C. D., Trimpert, J., Höltje, M., Dietert, K., Stöffler, L., Wardenburg, N. von, Hoof, S. van, Homeyer, M. A., Hoffmann, J., Abdelgawad, A., … Prüss, H. (2020). A SARS-CoV-2 neutralizing antibody protects from lung pathology in a COVID-19 hamster model. BioRxiv, 2020.08.15.252320. https://doi.org/10.1101/2020.08.15.252320

    1. Malani, A., Soman, S., Asher, S., Novosad, P., Imbert, C., Tandel, V., Agarwal, A., Alomar, A., Sarker, A., Shah, D., Shen, D., Gruber, J., Sachdeva, S., Kaiser, D., & Bettencourt, L. M. A. (2020). Adaptive Control of COVID-19 Outbreaks in India: Local, Gradual, and Trigger-based Exit Paths from Lockdown (Working Paper No. 27532; Working Paper Series). National Bureau of Economic Research. https://doi.org/10.3386/w27532

  10. Jul 2020
    1. At the substitution level, you are substituting a cup of coffee that we could make at home or school with a cup of coffee from Starbucks. It’s still coffee: there’s no real change.

      Love this example with one of my favorite things: coffee! Having these examples is very helpful to me; this article not only provides examples, it explains why they are examples of each level.

    2. The SAMR model allows you the opportunity to evaluate why you are using a specific technology, design tasks that enable higher-order thinking skills, and engage students in rich learning experiences.

      Clearly stated purpose of the SAMR model!

  11. Jun 2020
    1. Facebook already harvests some data from WhatsApp. Without Koum at the helm, it’s possible that could increase—a move that wouldn’t be out of character for the social network, considering that the company’s entire business model hinges on targeted advertising around personal data.
    1. epsilon: a very small number to prevent any division by zero in the implementation (e.g. 10E-8). Further, learning rate decay can also be used with Adam. The paper uses a decay rate alpha = alpha/sqrt(t), updated each epoch (t) for the logistic regression demonstration.

      The Adam paper suggests: “Good default settings for the tested machine learning problems are alpha=0.001, beta1=0.9, beta2=0.999 and epsilon=10^-8.”

      The TensorFlow documentation suggests some tuning of epsilon: “The default value of 1e-8 for epsilon might not be a good default in general. For example, when training an Inception network on ImageNet a current good choice is 1.0 or 0.1.”

      We can see that the popular deep learning libraries generally use the default parameters recommended by the paper:

      • TensorFlow: learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-08
      • Keras: lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0
      • Blocks: learning_rate=0.002, beta1=0.9, beta2=0.999, epsilon=1e-08, decay_factor=1
      • Lasagne: learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-08
      • Caffe: learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-08
      • MxNet: learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-8
      • Torch: learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-8

      Should we expose EPS as one of the experiment parameters? I think that we shouldn't since it is a rather technical parameter.
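
      For reference, a minimal sketch of overriding epsilon when constructing the optimizer in Keras (assuming TensorFlow 2.x; parameter names per the tf.keras API):

      import tensorflow as tf

      # paper defaults, with epsilon exposed explicitly; the TensorFlow docs
      # suggest trying larger values (e.g. 0.1 or 1.0) for some models
      optimizer = tf.keras.optimizers.Adam(
          learning_rate=0.001,
          beta_1=0.9,
          beta_2=0.999,
          epsilon=1e-8,
      )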

  12. May 2020
    1. Given the disjoint vocabularies (Section 2) and the magnitude of improvement over BERT-Base (Section 4), we suspect that while an in-domain vocabulary is helpful, SciBERT benefits most from the scientific corpus pretraining.

      The specific vocabulary only slightly increases the model accuracy. Most of the benefit comes from domain specific corpus pre-training.

    2. We construct SciVocab, a new WordPiece vocabulary on our scientific corpus using the SentencePiece library. We produce both cased and uncased vocabularies and set the vocabulary size to 30K to match the size of BaseVocab. The resulting token overlap between BaseVocab and SciVocab is 42%, illustrating a substantial difference in frequently used words between scientific and general domain texts.

      For SciBERT they created a new vocabulary of the same size as for BERT. The overlap was at the level of 42%. We could check what the overlap is in our case.
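
      A minimal sketch of that overlap check in Python, assuming two hypothetical one-token-per-line vocab files (the format BERT vocab.txt files use); whether the paper computed overlap over the union or over one vocabulary is an assumption here:

      def load_vocab(path):
          with open(path, encoding="utf-8") as f:
              return {line.strip() for line in f if line.strip()}

      base = load_vocab("bert-base-vocab.txt")    # hypothetical paths
      domain = load_vocab("our-domain-vocab.txt")

      # both vocabularies are ~30K in the paper, so these two ratios are close
      print(f"overlap / base:  {len(base & domain) / len(base):.0%}")
      print(f"overlap / union: {len(base & domain) / len(base | domain):.0%}")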

    1. Although we could have constructed new WordPiece vocabulary based on biomedical corpora, we used the original vocabulary of BERTBASE for the following reasons: (i) compatibility of BioBERT with BERT, which allows BERT pre-trained on general domain corpora to be re-used, and makes it easier to interchangeably use existing models based on BERT and BioBERT and (ii) any new words may still be represented and fine-tuned for the biomedical domain using the original WordPiece vocabulary of BERT.

      BioBERT does not change the BERT vocabulary.

    1. def _tokenize(self, text):
           split_tokens = []
           if self.do_basic_tokenize:
               # basic tokenization (whitespace/punctuation) first, then WordPiece on each token
               for token in self.basic_tokenizer.tokenize(text, never_split=self.all_special_tokens):
                   for sub_token in self.wordpiece_tokenizer.tokenize(token):
                       split_tokens.append(sub_token)
           else:
               # WordPiece directly on the raw text
               split_tokens = self.wordpiece_tokenizer.tokenize(text)
           return split_tokens

      How BERT tokenization works
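
      To see this two-stage (basic + WordPiece) tokenization in action - a quick sketch, assuming the transformers library and the standard bert-base-uncased checkpoint:

      from transformers import BertTokenizer

      tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
      # out-of-vocabulary words get split into WordPiece sub-tokens,
      # with continuation pieces prefixed by "##"
      print(tokenizer.tokenize("tokenization of thrombocytopenia"))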

    1. My initial experiments indicated that adding custom words to the vocab-file had some effects. However, at least on my corpus that can be described as "medical tweets", this effect just disappears after running the domain specific pretraining for a while. After spending quite some time on this, I have ended up dropping the custom vocab-files totally. Bert seems to be able to learn these specialised words by tokenizing them.

      sbs experience from extending the vocabulary for medical data

    2. Since Bert does an excellent job in tokenising and learning these combinations, do not expect dramatic improvements by adding words to the vocab. In my experience adding very specific terms, like common long medical latin words, has some effect. Adding words like "footballs" will likely just have negative effects since the current vector is already pretty good.

      Expected improvement of extending the BERT vocabulary

    1. As is the case in NLP applications in general, we begin by turning each input word into a vector using an embedding algorithm.

      What is the embedding algorithm for BERT?
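
      One way to poke at this - a minimal sketch assuming the transformers library - showing that BERT’s input vectors come from a learned token-embedding lookup table (position and segment embeddings are added to these downstream):

      from transformers import BertModel, BertTokenizer

      tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
      model = BertModel.from_pretrained("bert-base-uncased")

      ids = tokenizer("hello world", return_tensors="pt")["input_ids"]
      # the token embedding table: vocab_size x hidden_size (30522 x 768)
      table = model.embeddings.word_embeddings
      print(table.weight.shape)
      print(table(ids).shape)  # one 768-dim vector per input token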

    1. It is fairly expensive (four days on 4 to 16 Cloud TPUs), but is a one-time procedure for each language

      Estimates for pre-training the model from scratch

    1. I do not understand what the threat model is for not allowing the root user to configure Firefox, since malware could just replace the entire Firefox binary.
  13. Apr 2020