243 Matching Annotations
  1. Jul 2022
    1. Computer science is the subject that studies what computers can do and investigates the best ways you can solve the problems of the world with them. It is a huge field overlapping pure mathematics, engineering and many other scientific disciplines. In this video I summarise as much of the subject as I can and show how the areas are related to each other. #computer #science #DomainOfScience
    1. An instance may be given of the necessity of the “separate sheet” system. Among the many sources of information from which we constructed our book The Manor and the Borough were the hundreds of reports on particular boroughs made by the Municipal Corporation Commissioners in 1835. These four huge volumes are well arranged and very fully indexed; they were in our own possession; we had read them through more than once; and we had repeatedly consulted them on particular points. We had, in fact, used them as if they had been our own bound notebooks, thinking that this would suffice. But, in the end, we found ourselves quite unable to digest and utilise this material until we had written out every one of the innumerable facts on a separate sheet of paper, so as to allow of the mechanical absorption of these sheets among our other notes; of their complete assortment by subjects; and of their being shuffled and reshuffled to test hypotheses as to suggested co-existences and sequences.

      Webb's use case here sounds like she had the mass of data, but what she really desired was a database she could more easily query for her work and research. As a result, she took the flat-file data and made it into a manually sortable and searchable database.

    1. Conversely, the Topics feature did not seem super helpful, which was surprising because I initially thought this feature would be useful, but it just did not seem relevant or accurate. Maybe this is because, as a work of literature, the themes of the play are much more symbolic and figurative than the literal words the play uses. Perhaps this function would work better for text that is more nonfiction-based, or at least more literal.

      I read your Voyant analysis of Henrik Ibsen's "A Doll's House," and I think we picked almost the same tools that we believe to be crucial for our text analysis. Like you, I mostly visualize my chosen literary work with Cirrus, Terms, Berry, and Trends. I also use Links to look into how these words are used interdependently to contextualize the story told. I also had difficulty understanding how functions like Topics would benefit my understanding of the texts on a layered and complex level. I checked and thought maybe the problem was with the word count of the document. By default, Topics generates topics from the first 1000 words of a document, and A Doll's House has 26,210 words. To use this tool in the most efficient way possible, you can try the Topics slider (the scroll bar) to adjust the number of topics you want to generate (the max is 200). I have read A Doll's House before, so I can't speak for those who haven't. However, the clusters of chosen terms hint to me that this fiction deals with bureaucracy and finance via repeated words like "works," "money," and "paper." I can also recognize some words classified as names, so many characters are involved in the story. There is also a vague clue to the story's setting, which is during the winter season, from the repetition of the word "Christmas." It appears that someone is getting angry at someone for their wrongdoings, and this drama occurs in a family. While Topics cannot give me a complete storyline, it gives me a good set of puzzle pieces with which to piece together the core gist of the story. The same happened when I analyzed Herman Melville's Moby-Dick: words like "whale," "sea," "sailor," and "chase" allowed me to make a reasonable assumption that a group of sailors went after a giant whale at sea. I still prefer to use other tools, but that was how I utilized Topics for my knowledge of the text.
I agree that text with more literal content, like self-help books, would definitely yield better results with Voyant Tools' Topics.

  2. Jun 2022
    1. Thus flexibility is an important virtue in computer-assisted textual analysis, and testing a project on a subset of texts or methods can avoid wasted effort.

      Flexibility has almost become a sought-after characteristic of any project ever conducted in this world, let alone those that belong to the school of humanities. Any individual or group entering a long-term project should be aware that the outcome can never be fully predicted. It's impossible to identify and avoid every surprise factor on a long road, but it's definitely possible to have an open mindset that's ready for any difficulty coming along the way and for brainstorming solutions that resolve each "shock". In many cases, these unexpected variables are what render the project memorable, special, sustainable, valid, and reliable. In many cases, changing the initial direction of the project when faced with unforeseen elements is for the better and produces even stronger results. Testing out different methods of textual analysis is particularly good advice for anyone bound to carry out a project in the coming future.

  3. May 2022
    1. In explaining his approach, Luhmann emphasized, with the first steps of computer technology in mind, the benefits of the principle of “multiple storage”: in the card index it serves to provide different avenues of accessing a topic or concept since the respective notes may be filed in different places and different contexts. Conversely, embedding a topic in various contexts gives rise to different lines of information by means of opening up different realms of comparison in each case due to the fact that a note is an information only in a web of other notes. Furthermore it was Luhmann’s intention to “avoid premature systematization and closure and maintain openness toward the future.”11 His way of organizing the collection allows for it to continuously adapt to the evolution of his thinking and his overall theory which as well is not conceptualized in a hierarchical manner but rather in a cybernetical way in which every term or theoretical concept is dependent on the other.

      While he's couching it in the computer science milieu of his day, this is not dissimilar to the Llullan combinatorial arts.

    1. Robert Fenton, Electrical and Computer Engineering Professor Emeritus, pioneered the technology for the first wave of self-driving cars.

      I had Fenton for a class once and during a lecture he asked a question of the class. A student raised his hand and answered. Professor Fenton listened and asked the class "Does anyone else agree that his answer is correct?"

      About 85% of the students in the large lecture hall raised their hands.

      He paused, shook his head, and said "Well, then I'm afraid you're all going to fail." Then he turned around and went back to writing on the chalkboard.

  4. Apr 2022
    1. In his manuscript, Harrison spoke of machina with respect to his filing cabinet and named his invention ‘Ark of Studies’. In rhetorical culture, ‘ark’ had been a metaphor that, among many others, denoted the virtual store-house that orators stocked with vivid images of memorable topics (res) and words (verba). In Harrison’s manuscript, ‘ark’ instead became a synonym for ‘mechanical’ memory. In turn, in the distinction between natural and artificial memory, consciousness was compelled to leave its place and to shift to the opposing side.

      Thomas Harrison used the word machina to describe his 'Ark of Studies', a filing cabinet for notes and excerpts from other works. This represents part of a discrete and very specific change on the continuum of movement from the ars memoria (artificial memory) to the ars excerptendi (note taking). Within the rhetorical tradition relying on creating memorable images for topics (res) and words (verba) the idea of an ark was often used as a memory palace as seen in Hugh of St. Victor's De arca Noe mystica, or ‘‘The Ark of Noah According to the Spiritual Method of Reading" (1125–30). It starts the movement from natural and artificial memory to a form of external and mechanical memory represented by his physical filing cabinet.

      Reference Yates and Carruthers for Hugh of St. Victor.

    1. An alternative definition for computer science, then, is to say that computer science is the study of problems that are and that are not computable, the study of the existence and the nonexistence of algorithms.

      definition of computer science

    2. Computer science is the study of problems, problem-solving, and the solutions that come out of the problem-solving process. Given a problem, a computer scientist’s goal is to develop an algorithm, a step-by-step list of instructions for solving any instance of the problem that might arise. Algorithms are finite processes that if followed will solve the problem. Algorithms are solutions.

      Computer science definition
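      As an illustration (my own example, not from the annotated text), Euclid's algorithm for greatest common divisors is a classic instance of the definition above: a finite, step-by-step list of instructions that solves any instance of its problem.

```go
package main

import "fmt"

// gcd implements Euclid's algorithm: a finite sequence of steps
// that solves every instance of the greatest-common-divisor problem.
func gcd(a, b int) int {
	for b != 0 {
		a, b = b, a%b // replace (a, b) with (b, a mod b) until b reaches 0
	}
	return a
}

func main() {
	fmt.Println(gcd(48, 18)) // prints 6
}
```

      The loop is guaranteed to terminate because the second argument strictly decreases, which is exactly the "finite process" the quoted definition demands.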

  5. Mar 2022
    1. Exercises


      Counterexample: \(\to \; := \{(a, c), (b, c)\}\)


      \(a \to b\) iff \(a\) encodes Turing machine \(M_a\) and \(b\) encodes a valid terminating computation (sequence of states) of \(M_a\).


      Let \(|w|_a := \varphi_a(w)\).

      \(\varphi(w) := 3^{|w|_a} 2^{|w|_b}\)


      1. Let \(u \to_1 v\). Then \(\varphi(v) = 3^{|v|_a} 2^{|v|_b} = 3^{|u|_a+1} 2^{|u|_b-2} = 3^{|u|_a} 2^{|u|_b} \frac{3}{4} = \varphi(u) \frac{3}{4} < \varphi(u)\).
      2. Let \(u \to_2 v\). Then \(\varphi(v) = 3^{|v|_a} 2^{|v|_b} = 3^{|u|_a-1} 2^{|u|_b+1} = 3^{|u|_a} 2^{|u|_b} \frac{2}{3} = \varphi(u) \frac{2}{3} < \varphi(u)\).



      Let \(a > b\). Then \(\{b^n a \mid n \in \mathbb{N}\}\) is an infinite chain according to \(>_{Lex}\).

      Note: This exercise completes the discussion of Lemma 2.4.3.


      Let \(s, t\) be terms. Run BFS from \(s\) using \(\leftrightarrow^E\). If \(t\) is encountered, conclude that \(s \approx_E t\). If the BFS finishes enumerating the equivalence class without encountering \(t\), conclude that \(\lnot s \approx_E t\).


      Let \(x \in Var(r) \setminus Var(l)\). Let \(p\) be a position of \(x\) in \(r\).

      Infinite chain:

      • \(t_0 = x\)
      • \(t_{i+1} = r[t_i]_p\)


      1. a
        • Unifier: \(\{x \to h(a), y \to h(a)\}\)
        • Matcher: \(\{x \to h(a), y \to x\}\)
      2. b
        • Unifier: Unsolvable
        • Matcher: \(\{x \to h(x), y \to x\}\)
      3. c
        • Unifier: \(\{x \to h(y), z \to b\}\)
        • Matcher: Unsolvable
      4. d
        • Unifier: Unsolvable
        • Matcher: Unsolvable


      Counterexample TRS \(R\):

      1. \(a \to b\)
      2. \(b \to b\)
    1. In sum, studies on the communication of students with autism highlight the importance of a context rich in appropriate stimulation (sounds and images), but also an evident “stability” of the information to be decoded, the tracking of the characters’ emotions, and the role of imitation in learning. These results therefore encourage the use of suitable computer tools to improve social communication in children with autism.

      The association of two subjects with no verified correlation returns in the conclusion, in contradiction with the conclusion of the study by Ramdoss, S. et al.

    2. Through a short analysis of several studies, we will show the impact of computerized educational work on the learning of social communication in children with autism.

      In contradiction with the hypothesis:

      Results suggest that CBI should not yet be considered a research-based approach to teaching communication skills to individuals with ASD. However, CBI does seem a promising practice that warrants future research.

  6. Feb 2022
    1. 9/8g Hinter der Zettelkastentechnik steht die Erfahrung: Ohne zu schreiben kann man nicht denken – jedenfalls nicht in anspruchsvollen, selektiven Zugriff aufs Gedächtnis voraussehenden Zusammenhängen. Das heißt auch: ohne Differenzen einzukerben, kann man nicht denken.

      Google translation:

      9/8g The Zettelkasten technique is based on experience: You can't think without writing—at least not in contexts that require selective access to memory.

      That also means: you can't think without notching differences.

      There's something interesting about the translation here of "notching" occurring on an index card about ideas which can be linked to the early computer science version of edge-notched cards. Could this have been a subtle and tangential reference to just this sort of computing?

      The idea isn't new to me, but in the last phrase Luhmann tangentially highlights the value of the zettelkasten for more easily and directly comparing and contrasting the ideas on two different cards which might be either linked or juxtaposed.

      Link to:

      • Graeber and Wengrow ideas of storytelling
      • Shield of Achilles and ekphrasis thesis

      • https://hypothes.is/a/I-VY-HyfEeyjIC_pm7NF7Q With the further context of the full quote, including "with selective access to memory", Luhmann seemed at least to make space (if not give a tacit nod?) to oral traditions, which had methods for accessing memories in ways that modern literates typically don't credit at all. Johannes F. K. Schmidt certainly didn't, and actively erased it in Niklas Luhmann’s Card Index: The Fabrication of Serendipity.

    1. "Context" manipulation is one of big topic and there are many related terminologies (academic, language/implementation specific, promotion terminologies). In fact, there is confusing. In few minutes I remember the following related words and it is good CS exam to describe each :p Thread (Ruby) Green thread (CS terminology) Native thread (CS terminology) Non-preemptive thread (CS terminology) Preemptive thread (CS terminology) Fiber (Ruby/using resume/yield) Fiber (Ruby/using transfer) Fiber (Win32API) Generator (Python/JavaScript) Generator (Ruby) Continuation (CS terminology/Ruby, Scheme, ...) Partial continuation (CS terminology/ functional lang.) Exception handling (many languages) Coroutine (CS terminology/ALGOL) Semi-coroutine (CS terminology) Process (Unix/Ruby) Process (Erlang/Elixir) setjmp/longjmp (C) makecontext/swapcontext (POSIX) Task (...)
    1. To satisfy the architecture of a modern process, a space separate from the usual library business is furnished, a catalog room or working memory for a central bibliographic unit. In this CBU, the program processes data contributed by various paths.

      Note here how the author creates the acronym CBU out of central bibliographic unit as a means of creating a connection to computer jargon like CPU (central processing unit). I suspect that CBU was not an acronym used at the time.




  7. Jan 2022
    1. Budak, C., Soroka, S., Singh, L., Bailey, M., Bode, L., Chawla, N., Davis-Kean, P., Choudhury, M. D., Veaux, R. D., Hahn, U., Jensen, B., Ladd, J., Mneimneh, Z., Pasek, J., Raghunathan, T., Ryan, R., Smith, N. A., Stohr, K., & Traugott, M. (2021). Modeling Considerations for Quantitative Social Science Research Using Social Media Data. PsyArXiv. https://doi.org/10.31234/osf.io/3e2ux

    1. Here, the card index functions as a ‘thinking machine’,67 and becomes the best communication partner for learned men.68

      From a computer science perspective, isn't the index card functioning like an external memory, albeit one with somewhat pre-arranged linked paths? It's the movement through the machine's various paths that is doing the "thinking". Or perhaps it's the user's (active) choices in creating the paths that create the impression of thinking.

      Perhaps it's the pre-arranged links where the thinking has already happened (based on "work" put into the system) and then traversing the paths gives the appearance of "new" thinking?

      How does this relate to other systems which can be thought of as thinking from a complexity perspective? Bacteria perhaps? Groups of cells acting in concert? Groups of people acting in concert? Cells seeking out food using random walks? Etc.?

      From this perspective, how can we break out the constituent parts of thought and thinking? Consciousness? With enough nodes and edges and choices of paths between them (or a "correct" subset of paths) could anything look like thinking or computing?

  8. Dec 2021
    1. Jacob Leupold, Theatrum machinarum. Theatrum arithmetico-geometricum, Das ist: Schau-Platz der Rechnen- und Meß-Kunst, vol. 7 (Leipzig, 1727)

      Reference that discusses calculating machines and information processors.

    2. It is telling that during the same period in which Harrison invented his Ark of Studies, the first calculating machines were tested in Europe: the famous cista mathematica by Athanasius Kircher, the organum mathematicum by Kaspar Schott, and the cistula by Gottfried Wilhelm Leibniz.

      Keep in mind that Leibniz actually had a version of Harrison's cabinet in his possession. (cf. Paper Machines)

    3. Through an inner structure of recursive links and semantic pointers, a card index achieves a proper autonomy; it behaves as a ‘communication partner’ who can recommend unexpected associations among different ideas. I suggest that in this respect pre-adaptive advances took root in early modern Europe, and that this basic requisite for information processing machines was formulated largely by the keyword ‘order’.

      aliases for "topical headings": headwords, keywords, tags, categories

    4. In § 3, I explain that to have a life of its own, a card index must be provided with self-referential closure.

      In order to become a free-standing tool, the card index needed to have self-referential closure.

      This may have been one of the necessary steps for the early ideas behind computers. In addition to the idea of a clockwork universe, the index card may have been a step towards early efforts at creating the modern computer.

    1. In computer engineering, microarchitecture, also called computer organization and sometimes abbreviated as µarch or uarch, is the way a given instruction set architecture (ISA) is implemented in a particular processor.[1] A given ISA may be implemented with different microarchitectures;[2][3] implementations may vary due to different goals of a given design or due to shifts in technology.[4]

      Microarchitecture (µarch) What Does Microarchitecture (µarch) Mean? Microarchitecture, abbreviated as µarch or uarch, is the fundamental design of a microprocessor. It includes the technologies used, resources and the methods by which the processor is physically designed in order to execute a specific instruction set (ISA or instruction set architecture). Simply put, it is the logical design of all electronic components and data paths present in the microprocessor, laid out in a specific way that it allows for optimal execution of instructions. In academe this is called computer organization.


      Techopedia Explains Microarchitecture (µarch) Microarchitecture is the logical representation of how a microprocessor is designed so that the interconnections between components – the control unit, the arithmetic logic unit, registers and others – interact in an optimized manner. This includes how buses, the data pathways between components, are laid out to dictate the shortest paths and proper connections. In modern microprocessors there are often several layers to deal with complexity. The basic idea is to lay out a circuit that could execute commands and operations that are defined in an instruction set.

      A technique that is currently used in microarchitecture is the pipelined datapath. It is a technique that allows a form of parallelism that is applied in data processing by allowing several instructions to overlap in execution. This is done by having multiple execution pipelines that run in parallel or close to parallel.

      Execution units are also a crucial aspect of microarchitecture. Execution units perform the operations or calculations of the processor. The choice of the number of execution units, their latency and throughput is a central microarchitectural design consideration. The size, latency, throughput and connectivity of memories within the system are also microarchitectural decisions.

      Another part of a microarchitecture is system-level design. This includes decisions on performance such as level and connectivity of input, as well as output and I/O devices.

      Microarchitectural design pays closer attention to restrictions than capability. A microarchitecture design decision directly affects what goes into a system; it heeds to issues such as:

      • Performance
      • Chip area/cost
      • Logic complexity
      • Ease of debugging
      • Testability
      • Ease of connectivity
      • Power consumption
      • Manufacturability

      A good microarchitecture is one that caters to all of these criteria.

    1. In general, an ISA defines the supported instructions, data types, registers, the hardware support for managing main memory, fundamental features (such as the memory consistency, addressing modes, virtual memory), and the input/output model of a family of implementations of the ISA.

      An instruction set architecture defines all the logical operations (performed by corresponding digital logic hardware) that realize the computing tasks which facilitate our lives.

  9. Nov 2021
    1. What is Amazon Renewed? Amazon Renewed is your trusted destination for a huge selection of smartphones, computers, video games, power tools, and even more products that work and look like new and are backed by the Amazon Renewed Guarantee.


  10. Oct 2021
  11. Sep 2021
    1. A mental model is what the user believes about the system at hand.

      “Mental models are one of the most important concepts in human–computer interaction (HCI).”

      — Nielsen Norman Group

  12. Aug 2021
  13. Jul 2021
    1. Plugging a mid- or low-end PCI-E 3.0 graphics card into a PCI-E 2.0 motherboard is no problem! PCI-E is a serial interface standard; versions 1.0, 2.0, and 3.0 differ only in bandwidth, i.e. speed, and they are backward compatible: 3.0 is compatible with 2.0, which is compatible with 1.0. The "x16" suffix means 16 lanes; since it is a serial bus, the number simply indicates the multiplier. Common widths are x1, x4, x8, and x16. In terms of performance, a PCI-E 2.0 x16 slot already provides enough bandwidth for today's graphics cards, so there is essentially no performance loss, and a mid- or low-end card loses nothing even on PCI-E 1.0 x16. The newest motherboards are all PCI-E 3.0, but that bandwidth is rarely needed except by enthusiast-class dual-GPU cards. Plugging a PCI-E 3.0/2.0 card into a motherboard's PCI-E 3.0 x16 or PCI-E 2.0 x16 slot makes virtually no visible difference; the performance impact is negligible except for the very top-end cards. However, if the card is plugged into a PCI-E 2.0 x4 slot, graphics performance drops by about 20%. So whether the card is 2.0 or 3.0 doesn't matter; the key is whether the motherboard slot runs at x16 or x4.
  14. Jun 2021
    1. Running time here is an asymptotically tight bound: "asymptotically" because it matters only for large values of \(n\), and "tight bound" because we've nailed the running time to within a constant factor above and below.

    2. Running time has lower bound and upper bound once the number of computations get large enough.

    3. Computations take time to run. Big-Theta is the notation used to express an asymptotically tight bound on that running time.
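      The "tight bound" in these notes can be stated precisely (this is the standard textbook definition, not from the annotated text):

```latex
T(n) \in \Theta(f(n)) \iff
\exists\, c_1, c_2 > 0,\; n_0 \in \mathbb{N} :\;
\forall n \ge n_0,\quad
c_1 f(n) \;\le\; T(n) \;\le\; c_2 f(n)
```

      The two constants \(c_1\) and \(c_2\) are the "constant factor above and below", and \(n_0\) captures "only for large values of \(n\)".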

  15. Apr 2021
    1. Programming is using a language that a machine can understand in order to get it to perform various tasks. Computer programming is how we communicate with machines in a way that makes them function how we need.
    2. Earning a computer programming degree can help you innovate and create solutions for a global society.

      Can talk about how this applies to other areas/problem-solving/impact on world.

    1. OpenCV:
      • Android Application Programming with OpenCV
      • Mastering OpenCV with Practical Computer Vision Projects
      • Practical OpenCV
      • Learning OpenCV

      Good books for tinkering with OpenCV in C++ and Java (embedded and mobile)

  16. Mar 2021
    1. Some believe that computing and internetworking concepts and skills underlie virtually every important aspect of LIS, indeed see LIS as a sub-field of computer science!
  17. Feb 2021
    1. So the hard and unsolvable problem becomes: how up-to-date do you really need to be?
    2. After considering the value we place, and the tradeoffs we make, when it comes to knowing anything of significance, I think it becomes much easier to understand why cache invalidation is one of the hard problems in computer science.

      the crux of the problem is: trade-offs

    3. the 2 hardest problems in computer science are essentially the 2 hardest problems of life in general, as far as humans and information are concerned.
    4. The non-determinism is why cache invalidation — and that other hard problem, naming things — are uniquely and intractably hard problems in computer science. Computers can perfectly solve deterministic problems. But they can’t predict when to invalidate a cache because, ultimately, we, the humans who design and build computational processes, can’t agree on when a cache needs to be invalidated.
    5. Sometimes humorously extended as “cache invalidation, naming things, and off-by-one errors.”
    1. There’s only one hard thing in Computer Science: human communication. The most complex part of cache invalidation is figuring out what the heck people mean with the word cache. Once you get that sorted out, the rest is not that complicated; the tools are out there, and they’re pretty good.
  18. Jan 2021
    1. https://hyp.is/go?url=https%3A%2F%2Fwww.archdaily.com%2F627654%2Fthe-computer-vs-the-hand-in-architectural-drawing-archdaily-readers-respond&group=__world__

      I came across this article about the tension between computer drawing and hand drawing in architecture when I replied to an annotation by another user @onion - a very interesting read, and I would be curious to see this issue revisited in another ten years... how might opinions have changed?

  19. Sep 2020
    1. There was a time when we could install applications, give some sort of explicit agreement that something would run on our computers and use our hardware. That time is ending,

      The end seems perilously close at hand for personal computing but, imo, as much as anything that is because users now expect their computing to have impact and effect far beyond the beige box.

      Open source has many amazing things, but in terms of ways to get users' digital stuff online, available, and circulating, there have been precious few compelling attempts. I'd call out in particular the RemoteStorage spec, and the newer SOLID specs from MIT & TBL.

  20. Jun 2020
    1. Plenty of journalists, attorneys, and activists are equally if not more threatened by so-called evil maid attacks, in which a housekeeper or other stranger has the ability to tamper with firmware during brief physical access to a computer.
  21. May 2020
    1. What's terrible and dangerous is a faceless organization deciding to arbitrarily and silently control what I can and can not do with my browser on my computer. Orwell is screaming in his grave right now. This is no different than Mozilla deciding I don't get to visit Tulsi Gabbard's webpage because they don't like her politics, or I don't get to order car parts off amazon because they don't like hyundai, or I don't get to download mods for minecraft, or talk to certain people on facebook.
    2. They don't have to host the extension on their website, but it's absolutely and utterly unacceptable for them to interfere with me choosing to come to github and install it.
    3. I appreciate the vigilance, but it would be even better to actually publish a technical reasoning for why do you folks believe Firefox is above the device owner, and the root user, and why there should be no possibility through any means and configuration protections to enable users to run their own code in the release version of Firefox.
  22. Apr 2020
    1. If the word “share” doesn’t come out of your mouth, you don’t need to use a pointer

      key point

    2. The benefit of passing data “by value” is readability. The value you see in the function call is what is copied and received on the other side

      no hidden cost, e.g., memory growth on the heap or pauses during garbage collection. But there is a cost in stack memory usage and "scoping" among multiple stack frames, CPU caching, etc.

    3. Functions execute within the scope of frame boundaries that provide an individual memory space for each respective function. Each frame allows a function to operate within their own context and also provides flow control. A function has direct access to the memory inside its frame, through the frame pointer, but access to memory outside its frame requires indirect access. For a function to access memory outside of its frame, that memory must be shared with the function.

      eg., shared via the "pointer" to an address in heap memory
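      The frame/sharing distinction described above can be sketched in Go (my own minimal example; the type and function names are illustrative, not from the source):

```go
package main

import "fmt"

type counter struct{ n int }

// incByValue receives a copy inside its own frame;
// the caller's value is untouched.
func incByValue(c counter) { c.n++ }

// incByPointer receives a shared address and mutates memory
// outside its own frame through indirect access.
func incByPointer(c *counter) { c.n++ }

func main() {
	c := counter{n: 1}
	incByValue(c)
	fmt.Println(c.n) // still 1: only the copy was modified
	incByPointer(&c)
	fmt.Println(c.n) // 2: the callee shared c's memory
}
```

      Only the pointer version "shares" the caller's memory with the callee's frame, which is exactly the rule of thumb quoted earlier: if the word "share" doesn't come out of your mouth, you don't need a pointer.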

  23. Feb 2020
    1. Discourses tend to be intertextual and interdiscursive (Reisigl and Wodak, 2001: 39). They interlink various texts, discourses and contexts. Social media data are therefore not independent from other media but tend to be multimodal and connected with texts in traditional media. An example is that many political tweets tend to link to articles in the online versions of mainstream newspapers. Studying social media therefore does not substitute the study of other media but often requires studying various media’s interconnection. Discourses are texts that stand in particular societal, political-economic, historical, cultural contexts. Understanding them requires taking a holistic point of view, that is, to situate them in history and society.
  24. Jan 2020
    1. The plural for the small rodent is always "mice" in modern usage. The plural of a computer mouse is either "mouses" or "mice" according to most dictionaries, with "mice" being more common.[4] The first recorded plural usage is "mice"; the online Oxford Dictionaries cites a 1984 use, and earlier uses include J. C. R. Licklider's "The Computer as a Communication Device" of 1968.[5] The term computer mouses may be used informally in some cases. Although the plural of a mouse (small rodent) is mice, the two words have undergone a differentiation through usage.
    2. plural mice
  25. Dec 2019
  26. Nov 2019
    1. TrackMeNot is user-installed and user-managed, residing wholly on users' system and functions without the need for 3rd-party servers or services. Placing users in full control is an essential feature of TrackMeNot, whose purpose is to protect against the unilateral policies set by search companies in their handling of our personal information.
    1. Computer literacy is considered a very important skill to possess. Employers want their workers to have basic computer skills because their company becomes ever more dependent on computers

      Learning the basics about computers is essential to employers.

    1. Author Mary Burns discusses the key elements of computer adaptive testing (CAT). CAT is defined as assessment that uses algorithms to progressively adjust test difficulty based upon the learner's correct or incorrect responses. The benefits of CAT include more immediate data and often greater reliability. Types of test items are also covered to illustrate how the test can meet various levels of cognition and measure expertise. An issue related to CAT is the intensive time needed to develop multiple test items at multiple levels of cognition. Rating: 8/10

  27. Oct 2019
    1. Best Overall: SanDisk Extreme PRO 128 GB Drive. The SanDisk PRO gives you blistering speeds, offering 420 MB/s on the reading front and 380 MB/s on the writing end, which is 3–4x faster than what a standard USB 3.0 drive will offer. The sleek, aluminum casing is both super durable and very eye-catching, so you can bring it with you to your business meetings and look professional as well. The onboard AES, 128-bit file encryption gives you top-of-the-line security for your sensitive files.
  28. Jun 2019
    1. Ethereum is a distributed computer; each node in the network executes some bytecode (hint: Smart Contracts), and then stores the resulting state in a blockchain. Due to the properties of the blockchain representing application state, this results in “applications that run exactly as programmed without any possibility of downtime, censorship, fraud or third party interference”.

      This is a decent little explanation of how smart contracts execute on blockchains. In "Due to the properties of the blockchain" the author missed saying that all nodes must also come to consensus about how the code was executed, and therefore "applications that run exactly...". We will later discuss deterministic code execution in relation to this
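
      A toy sketch of the point made here: every node replays the same transactions against the same prior state, and deterministic execution is what lets them all arrive at an identical result to agree on. The account model below is an illustrative assumption, not the actual EVM.

```python
# Toy model (not the real EVM) of why determinism matters: every node
# replays the same transactions against the same prior state and must
# reach an identical new state before consensus is possible.

def apply_transaction(state, tx):
    """A pure, deterministic state transition over account balances."""
    new_state = dict(state)
    new_state[tx["from"]] = new_state.get(tx["from"], 0) - tx["amount"]
    new_state[tx["to"]] = new_state.get(tx["to"], 0) + tx["amount"]
    return new_state

def execute_block(state, txs):
    for tx in txs:
        state = apply_transaction(state, tx)
    return state

genesis = {"alice": 10, "bob": 0}
block = [{"from": "alice", "to": "bob", "amount": 3}]

# Two independent "nodes" replay the same block...
node_a = execute_block(genesis, block)
node_b = execute_block(genesis, block)
# ...and must agree exactly, or consensus fails.
assert node_a == node_b == {"alice": 7, "bob": 3}
```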

  29. May 2019
    1. Go Programming Language publicly in 2009 they were also looking to solve certain challenges of the existing Computer languages. Of the many features that it demonstrated (we will get to those soon enough) it was also helpful in addressing the strange dilemma of hardware and software that was emerging.

      Golang is a modern computing language, designed especially for modern computing needs.

    1. The first difficulty is that the robot’s utility function did not quite match our utility function. Our utility function is 1 if the cauldron is full, 0 if the cauldron is empty, −10 points to whatever the outcome was if the workshop has flooded, +0.2 points if it’s funny, −1,000 points (probably a bit more than that on this scale) if someone gets killed … and it just goes on and on and on.

      But it is very difficult to fully express these utility functions in code. The goal is literally to turn our ethics into code: to translate them into coherent data structures, algorithms, and decision trees. We want to encode our moral intuitions and more.
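
      A rough sketch of the cauldron utility function as described, mainly to show how the hand-written cases pile up. The outcome encoding (a dict of flags) is an assumption made for illustration.

```python
# Sketch of the cauldron utility function from the excerpt above. The
# outcome encoding is an illustrative assumption; the point is that
# every moral consideration becomes one more hand-written clause.

def utility(outcome):
    score = 0.0
    if outcome.get("someone_killed"):
        score -= 1000  # "probably a bit more than that on this scale"
    if outcome.get("workshop_flooded"):
        score -= 10
    if outcome.get("funny"):
        score += 0.2
    if outcome.get("cauldron_full"):
        score += 1
    # ...and it just goes on and on and on.
    return score
```

      Even this toy version needs a clause per consideration: a full cauldron scores 1.0, while a flooded-but-funny workshop nets roughly -9.8, and the list of clauses never ends.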

  30. Jan 2019
    1. Automatic Sequence Computer

      The Harvard Mark I was an ASCC, or Automatic Sequence Controlled Calculator (https://en.wikipedia.org/wiki/Harvard_Mark_I), so Clarke was using this term for what was possibly the most powerful computer of his time. Our smartphones are now more powerful than several of these ASCCs.

    1. Reflective Design Strategies In addition shaping our principles or objectives, our foundational influences and case studies have also helped us articulate strategies for reflective design. The first three strategies identified here speak to characteristics of designs that encourage reflection by users. The second group of strategies provides ways for reflecting on the process of design.

      verbatim from subheads in this section

      1.Provide for interpretive flexibility.

      2.Give users license to participate.

      3.Provide dynamic feedback to users.

      4.Inspire rich feedback from users.

      5.Build technology as a probe.

      6.Invert metaphors and cross boundaries.

    2. Some Reflective Design Challenges

      The reflective design strategies offer potential design interventions but lack advice on how to evaluate them against each other.

      "Designing for appropriation requires recognizing that users already interact with technology not just on a superficial, task-centered level, but with an awareness of the larger social and cultural embeddedness of the activity."

    3. Principles of Reflective Design

      verbatim from subheads in this section

      1. Designers should use reflection to uncover and alter the limitations of design practice

      2. Designers should use reflection to re-understand their own role in the technology design process.

      3. Designers should support users in reflecting on their lives.

      4. Technology should support skepticism about and reinterpretation of its own working.

      5. Reflection is not a separate activity from action but is folded into it as an integral part of experience

      6. Dialogic engagement between designers and users through technology can enhance reflection.

    4. Reflective design, like reflection-in-action, advocates practicing research and design concomitantly, and not only as separate disciplines. We also subscribe to a view of reflection as a fully engaged interaction and not a detached assessment. Finally, we draw from the observation that reflection is often triggered by an element of surprise, where someone moves from knowing-in-action, operating within the status quo, to reflection-in-action, puzzling out what to do next or why the status quo has been disrupted

      Influences from reflection-in-action for reflective design values/methods.

    5. In this effort, reflection-in-action provides a ground for uniting theory and practice; whereas theory presents a view of the world in general principles and abstract problem spaces, practice involves both building within these generalities and breaking them down.

      A more improvisational, intuitive and visceral process of rethinking/challenging the initial design frame.

      Popular with HCI and CSCW designers

    6. CTP is a key method for reflective design, since it offers strategies to bring unconscious values to the fore by creating technical alternatives. In our work, we extend CTP in several ways that make it particularly appropriate for HCI and critical computing.

      Ways in which Senger, et al., describe how to extend CTP for HCI needs:

      • incorporate both designer/user reflection on technology use and its design

      • integrate reflection into design even when there is no specific "technical impasse" or metaphor breakdown

      • driven by critical concerns, not simply technical problems

    7. CTP synthesizes critical reflection with technology production as a way of highlighting and altering unconsciously-held assumptions that are hindering progress in a technical field.

      Definition of critical technical practice.

      This approach is grounded in AI rather than HCI

      (verbatim from the paper) "CTP consists of the following moves:

      • identifying the core metaphors of the field

      • noticing what, when working with those metaphors, remains marginalized

      • inverting the dominant metaphors to bring that margin to the center

      • embodying the alternative as a new technology"

    8. Ludic design promotes engagement in the exploration and production of meaning, providing for curiosity, exploration and reflection as key values. In other words, ludic design focuses on reflection and engagement through the experience of using the designed object.

      Definition of ludic design.

      Offers a more playful approach than critical design.

    9. goal is to push design research beyond an agenda of reinforcing values of consumer culture and to instead embody cultural critique in designed artifacts. A critical designer designs objects not to do what users want and value, but to introduce both designers and users to new ways of looking at the world and the role that designed objects can play for them in it.

      Definition of critical design.

      This approach tends to be more art-based and intentionally provocative than a practical design method to inculcate a certain sensibility into the technology design process.

    10. value-sensitive design method (VSD). VSD provides techniques to elucidate and answer values questions during the course of a system's design.

      Definition of value-sensitive design.

      (verbatim from the paper)

      *"VSD employs three methods:

      • conceptual investigations drawing on moral philosophy, which identify stakeholders, fundamental values, and trade-offs among values pertinent to the design

      • empirical investigations using social-science methods to uncover how stakeholders think about and act with respect to the values involved in the system

      • technical investigations which explore the links between specific technical decisions and the values and practices they aid and hinder" *

    11. From participatory design, we draw several core principles, most notably the reflexive recognition of the politics of design practice and a desire to speak to the needs of multiple constituencies in the design process.

      Description of participatory design which has a more political angle than user-centered design, with which it is often equated in HCI

    12. PD strategies tend to be used to support existing practices identified collaboratively by users and designers as a design-worthy project. While values clashes between designers and different users can be elucidated in this collaboration, the values which users and designers share do not necessarily go examined. For reflective design to function as a design practice that opens new cultural possibilities, however, we need to question values which we may unconsciously hold in common. In addition, designers may need to introduce values issues which initially do not interest users or make them uncomfortable

      Differences between participatory design practices and reflective design

    13. We define 'reflection' as referring to critical reflection, or bringing unconscious aspects of experience to conscious awareness, thereby making them available for conscious choice. This critical reflection is crucial to both individual freedom and our quality of life in society as a whole, since without it, we unthinkingly adopt attitudes, practices, values, and identities we might not consciously espouse. Additionally, reflection is not a purely cognitive activity, but is folded into all our ways of seeing and experiencing the world.

      Definition of critical reflection

    14. Our perspective on reflection is grounded in critical theory, a Western tradition of critical reflection embodied in various intellectual strands including Marxism, feminism, racial and ethnic studies, media studies and psychoanalysis.

      Definition of critical theory

    15. Critical theory argues that our everyday values, practices, perspectives, and sense of agency and self are strongly shaped by forces and agendas of which we are normally unaware, such as the politics of race, gender, and economics. Critical reflection provides a means to gain some awareness of such forces as a first step toward possible change.

      Critical theory in practice

    16. We believe that, for those concerned about the social implications of the technologies we build, reflection itself should be a core technology design outcome for HCI. That is to say, technology design practices should support both designers and users in ongoing critical reflection about technology and its relationship to human life.

      Critical reflection can/should support designers and users.

  31. Dec 2018
    1. Outliers: All data sets have an expected range of values, and any actual data set also has outliers that fall below or above the expected range. (Space precludes a detailed discussion of how to handle outliers for statistical analysis purposes, see: Barnett & Lewis, 1994 for details.) How to clean outliers strongly depends on the goals of the analysis and the nature of the data.

      Outliers can be signals of unanticipated range of behavior or of errors.

    2. Understanding the structure of the data: In order to clean log data properly, the researcher must understand the meaning of each record, its associated fields, and the interpretation of values. Contextual information about the system that produced the log should be associated with the file directly (e.g., “Logging system recorded this file on 12-3-2012”) so that if necessary the specific code that generated the log can be examined to answer questions about the meaning of the record before executing cleaning operations. The potential misinterpretations take many forms, which we illustrate with encoding of missing data and capped data values.

      Context of the data collection and how it is structured is also a critical need.

      Example, coding missing info as "0" risks misinterpretation rather than coding it as NIL, NDN or something distinguishable from other data
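
      A minimal illustration of that risk, with a hypothetical latency field: a sentinel 0 silently drags the average down, while an explicit marker can be filtered out before analysis.

```python
# Illustration of the sentinel risk noted above, using a hypothetical
# latency field: encoding "missing" as 0 silently skews statistics,
# while an explicit None can be excluded before analysis.

def mean_latency(records):
    """Average latency over values that are explicitly present."""
    known = [r["latency_ms"] for r in records if r["latency_ms"] is not None]
    return sum(known) / len(known)

logs_with_sentinel_zero = [{"latency_ms": 100}, {"latency_ms": 0}]   # 0 meant "missing"
logs_with_none = [{"latency_ms": 100}, {"latency_ms": None}]

# The sentinel 0 is indistinguishable from real data and halves the
# apparent latency (50.0); the explicit marker preserves it (100.0).
```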

    3. Data transformations: The goal of data-cleaning is to preserve the meaning with respect to an intended analysis. A concomitant lesson is that the data-cleaner must track all transformations performed on the data.

      Changes to data during clean up should be annotated.

      Incorporate meta data about the "chain of change" to accompany the written memo

    4. Data Cleaning: A basic axiom of log analysis is that the raw data cannot be assumed to correctly and completely represent the data being recorded. Validation is really the point of data cleaning: to understand any errors that might have entered into the data and to transform the data in a way that preserves the meaning while removing noise. Although we discuss web log cleaning in this section, it is important to note that these principles apply more broadly to all kinds of log analysis; small datasets often have similar cleaning issues as massive collections. In this section, we discuss the issues and how they can be addressed. How can logs possibly go wrong? Logs suffer from a variety of data errors and distortions. The common sources of errors we have seen in practice include:

      Common sources of errors:

      • Missing events

      • Dropped data

      • Misplaced semantics (encoding log events differently)

    5. In addition, real world events, such as the death of a major sports figure or a political event can often cause people to interact with a site differently. Again, be vigilant in sanity checking (e.g., look for an unusual number of visitors) and exclude data until things are back to normal.

      Important consideration for temporal event RQs in refugee study -- whether external events influence use of natural disaster metaphors.

    6. Recording accurate and consistent time is often a challenge. Web log files record many different timestamps during a search interaction: the time the query was sent from the client, the time it was received by the server, the time results were returned from the server, and the time results were received on the client. Server data is more robust but includes unknown network latencies. In both cases the researcher needs to normalize times and synchronize times across multiple machines. It is common to divide the log data up into “days,” but what counts as a day? Is it all the data from midnight to midnight at some common time reference point or is it all the data from midnight to midnight in the user’s local time zone? Is it important to know if people behave differently in the morning than in the evening? Then local time is important. Is it important to know everything that is happening at a given time? Then all the records should be converted to a common time zone.

      Challenges of using time-based log data are similar to difficulties in the SBTF time study using Slack transcripts, social media, and Google Sheets
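
      A sketch of the normalization step the excerpt calls for: attach each machine's known UTC offset, then convert everything to a common UTC reference. The offsets and timestamps below are made up.

```python
# Sketch of timestamp normalization: interpret each machine's naive
# local timestamp at its known UTC offset, then convert to UTC so all
# records share a common reference. Offsets/timestamps are made up.

from datetime import datetime, timezone, timedelta

def to_utc(local_dt, utc_offset_hours):
    """Attach a known fixed offset to a naive timestamp, then convert to UTC."""
    tz = timezone(timedelta(hours=utc_offset_hours))
    return local_dt.replace(tzinfo=tz).astimezone(timezone.utc)

# The same instant logged by a client at UTC-8 and a server at UTC:
client_ts = to_utc(datetime(2012, 12, 3, 23, 30), utc_offset_hours=-8)
server_ts = to_utc(datetime(2012, 12, 4, 7, 30), utc_offset_hours=0)
assert client_ts == server_ts  # only after normalization do they share a "day"
```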

    7. Log Studies collect the most natural observations of people as they use systems in whatever ways they typically do, uninfluenced by experimenters or observers. As the amount of log data that can be collected increases, log studies include many different kinds of people, from all over the world, doing many different kinds of tasks. However, because of the way log data is gathered, much less is known about the people being observed, their intentions or goals, or the contexts in which the observed behaviors occur. Observational log studies allow researchers to form an abstract picture of behavior with an existing system, whereas experimental log studies enable comparisons of two or more systems.

      Benefits of log studies:

      • Complement other types of lab/field studies

      • Provide a portrait of uncensored behavior

      • Easy to capture at scale

      Disadvantages of log studies:

      • Lack of demographic data

      • Non-random sampling bias

      • Provide info on what people are doing but not their "motivations, success or satisfaction"

      • Can lack needed context (software version, what is displayed on screen, etc.)

      Ways to mitigate: Collecting, Cleaning and Using Log Data section

    8. Two common ways to partition log data are by time and by user. Partitioning by time is interesting because log data often contains significant temporal features, such as periodicities (including consistent daily, weekly, and yearly patterns) and spikes in behavior during important events. It is often possible to get an up-to-the-minute picture of how people are behaving with a system from log data by comparing past and current behavior.

      Bookmarked for time reference.

      Mentions challenges of accounting for time zones in log data.
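
      One way the day-partitioning might look in practice, bucketing by UTC calendar date (which is itself the reference-point choice the authors flag as a decision):

```python
# Sketch of partitioning log records by day. Bucketing by UTC date is
# one of the "what counts as a day?" choices discussed in the chapter;
# the record layout here is an illustrative assumption.

from collections import defaultdict
from datetime import datetime, timezone

def partition_by_day(records):
    """Group log records by their UTC calendar date."""
    buckets = defaultdict(list)
    for r in records:
        day = r["timestamp"].astimezone(timezone.utc).date().isoformat()
        buckets[day].append(r)
    return dict(buckets)

# Two events half an hour apart that straddle UTC midnight fall into
# different "days" under this definition:
events = [
    {"timestamp": datetime(2012, 12, 3, 23, 45, tzinfo=timezone.utc)},
    {"timestamp": datetime(2012, 12, 4, 0, 15, tzinfo=timezone.utc)},
]
daily = partition_by_day(events)
```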

    9. An important characteristic of log data is that it captures actual user behavior and not recalled behaviors or subjective impressions of interactions.

      Logs can be captured on client-side (operating systems, applications, or special purpose logging software/hardware) or on server-side (web search engines or e-commerce)

    10. Table 1 Different types of user data in HCI research

    11. Large-scale log data has enabled HCI researchers to observe how information diffuses through social networks in near real-time during crisis situations (Starbird & Palen, 2010 ), characterize how people revisit web pages over time (Adar, Teevan, & Dumais, 2008 ), and compare how different interfaces for supporting email organization influence initial uptake and sustained use (Dumais, Cutrell, Cadiz, Jancke, Sarin, & Robbins, 2003 ; Rodden & Leggett, 2010 ).

      Wide variety of uses of log data

    12. Behavioral logs are traces of human behavior seen through the lenses of sensors that capture and record user activity.

      Definition of log data

    1. The distinct sorts of questions asked of science and design manifest the different kinds of accountability that apply to each - that is, the expectations of what activities must be defended and how, and by extension the ways narratives (accounts) are legitimately formed about each endeavour. Science is defined by epistemological accountability, in which the essential requirement is to be able to explain and defend the basis of one’s claimed knowledge. Design, in contrast, works with aesthetic accountability, where ‘aesthetic’ refers to how satisfactory the composition of multiple design features are (as opposed to how ‘beautiful’ it might be). The requirement here is to be able to explain and defend – or, more typically, to demonstrate – that one’s design works.

      Scientific accountability >> epistemological

      Design accountability >> aesthetic

    2. The issue of whether something ‘works’ goes beyond questions of technical or practical efficacy to address a host of social, cultural, aesthetic and ethical concerns.

      Intent is the critical factor for design work, not its function.

    3. To be sure, the topicality, novelty or potential benefits of a given line of research might help it attract notice and support, but scientific research fundamentally stands or falls on the thoroughness with which activities and reasoning can be tied together. You just can’t get in the game without a solid methodology.

      Methodology is the critical factor for scientific study, not the result.

  32. Nov 2018
    1. English Teachers' Barriers to the Use of Computer-assisted Language Learning

      This article discusses the use of Computer-assisted Language Learning (CALL) technologies to teach English. Each stage of learning aligns with a level of computer technology. There are also many barriers that impede the process of integrating the CALL into the classroom, which include financial, access to hardware and software, teacher training, technical knowledge, and acceptance of technology.

      RATING: 8/10


      This article explores how learning styles and computer skills impact student online learning. Further consideration is also given to course format and participants who were first time online learners. This is a complex study that investigates possible skills and abilities of first time online students. It would be interesting to conduct the same study ten years later to see if changes in technology have improved the learners' computer skills and therefore the results of the study.

      RATING: 7/10

  33. Oct 2018
    1. As a recap, on September 19th Chegg discovered a data breach dating back to April, in which "an unauthorized party" accessed a database containing "a Chegg user’s name, email address, shipping address, Chegg username, and hashed Chegg password" but no financial information or social security numbers. The company has not disclosed, or is unsure of, how many of its 40 million users had their personal information stolen.

    1. Questions about the inclusivity of engineering and computer science departments have been going on for quite some time. Several current “innovations” coming out of these fields, many rooted in facial recognition, are indicative of how scientific racism has long been embedded in apparently neutral attempts to measure people — a “new” spin on age-old notions of phrenology and biological determinism, updated with digital capabilities.
  34. Sep 2018
    1. And its very likely that IA is a much easier road to the achievement of superhumanity than pure AI. In humans, the hardest development problems have already been solved. Building up from within ourselves ought to be easier than figuring out what we really are and then building machines that are all of that.

      The authors of the text propose a radically different approach to the inevitable "singularity" event: the research and development of IA, or Intelligence Amplification, which builds computers that work in symbiosis with humans. They note that IA could be easier to develop than AI algorithms, since it requires humanity to probe its own true weaknesses and strengths and then build IA systems that cover those weaknesses. This would also keep such a system in check, potentially delaying the point at which we reach the singularity.

  35. Aug 2018
    1. Earlier communication tools enabled individuals to create a private list of contacts (for instance a buddy list on instant messaging), to establish a group of contacts that were shared by others (such as a listserv membership list), or to publish a list of related links (such as a blogroll), but SNSs extended the practice of creating a publicly visible, personally curated list of contacts and made it a mainstream practice.

      Differences between SNS and CMC.

  36. Jun 2018
    1. Remember, the author made a more technical report on this topic. PDF here

    2. if we ever find the translation is dominant in a direction other than forward, we simply ignore that motion.

      Remember, this is just a heuristic

    3. Most Computer Vision algorithms are not complete without a few heuristics thrown in
    4. RANSAC. It is an iterative algorithm. At every iteration, it randomly samples five points from our set of correspondences, estimates the Essential Matrix, and then checks if the other points are inliers when using this essential matrix.
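
      The sample-estimate-count loop described here can be sketched on a simpler model (2-D line fitting) than the five-point essential-matrix solver; the loop structure is the same. Thresholds and points below are illustrative.

```python
# The generic RANSAC loop, demonstrated on 2-D line fitting rather
# than the five-point essential-matrix problem; the structure
# (sample a minimal set, fit, count inliers, keep the best) is the
# same. All thresholds and points here are illustrative.

import random

def ransac_line(points, iterations=200, threshold=0.5, seed=0):
    """Fit y = m*x + b while ignoring gross outliers."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iterations):
        # 1. Randomly sample the minimal set (two points define a line).
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # skip degenerate vertical samples
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        # 2. Count inliers: points close to the candidate line.
        inliers = [(x, y) for x, y in points
                   if abs(y - (m * x + b)) < threshold]
        # 3. Keep the candidate that explains the most points.
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (m, b), inliers
    return best_model, best_inliers

# Points mostly on y = 2x + 1, plus one gross outlier that a
# least-squares fit would be dragged toward but RANSAC rejects:
pts = [(0, 1), (1, 3), (2, 5), (3, 7), (5, 0)]
model, inliers = ransac_line(pts)
```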
    5. This step compensates for this lens distortion.
    6. For every pair of images, we need to find the rotation matrix R and the translation vector t, which describes the motion of the vehicle between the two frames.