75 Matching Annotations
  1. Nov 2018
    1. 19. Y. Lin, R. Jin, D. Cai, S. Yan, X. Li, in 2013 IEEE Conference on Computer Vision and Pattern Recognition (IEEE Computer Society, 2013), pp. 446–451.

      Lin et al. address a known issue in hashing methods, which is that they often require a large number of hash tables to be effective.

      To solve this problem, the authors propose a novel approach called compressed hashing. This approach uses sparse coding and is shown to be more effective than previously known hashing algorithms.

    2. 6. E. A. Hallem, J. R. Carlson, Cell 125, 143–160 (2006).

      Hallem and Carlson analyze how the fly olfactory circuit encodes the specific properties of an individual odor, such as its quality, intensity, and duration.

      They test over 100 different odors and create an "odor space" that shows how the responses of olfactory receptors depend on an odor's chemical class, concentration, and molecular complexity.

    3. 4. A. C. Lin, A. M. Bygrave, A. de Calignon, T. Lee, G. Miesenböck, Nat. Neurosci. 17, 559–568 (2014).

      Lin et al. consider the role of sparse coding in helping organisms remember specific odors.

      They discover that sparsity in the olfactory circuit is controlled by a "negative feedback circuit" involving Kenyon cells and anterior paired lateral neurons. From their results, they suggest that sparse coding is important for helping flies store a large number of odor-specific memories without confusing or overlapping information.

    4. strategies for similarity matching in the brain (21) and hashing algorithms for nearest-neighbors search in large-scale information retrieval systems.

      In this paper, the authors were able to use their knowledge of the fly olfactory circuit to improve the performance of a standard LSH algorithm. This suggests that the brain uses computational strategies for similarity searches that outperform some of those currently used in computer science.

      By studying the way the brain computes information, scientists can create faster and more efficient algorithms for solving the nearest-neighbor search problems that arise in everyday life (e.g. databases and search engines).

    5. segregate

      Set apart/isolate.

    6. Replacing the dense Gaussian random projection of LSH with a sparse binary random projection did not hurt how precisely nearest neighbors could be identified

      A typical LSH algorithm uses a dense Gaussian random projection, while the fly algorithm uses a sparse binary random projection. To test whether this made any difference in the performance of the two algorithms, the authors modified their LSH algorithm to use sparse binary random projections instead.

      The result was that this made no noticeable difference in the performance of the two algorithms (see Figure 2A).

    7. mean average precision

      The mean average precision (MAP) is a way of measuring how well an algorithm is doing at a task.

      Here, the authors took each algorithm (fly and LSH) and ran it against each data set (SIFT, GLOVE, and MNIST) for 1,000 different query inputs. Each run produced a ranked list of the images or words most similar to the query. At each position in the ranking you can measure a precision value: the fraction of items up to that position that are true nearest neighbors.

      Next, the authors averaged the precision values measured at the positions where true nearest neighbors appear in the ranking (so correct results near the top of the list carry the most weight) to get the average precision (AP) for that query. They then took the average of all 1,000 APs (one for each query) to get the MAP.

      Calculating a MAP for each algorithm and data set allowed the authors to compare the effectiveness of the algorithms.
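
      As a minimal sketch of this calculation (in Python with NumPy, using made-up ranked lists; the paper's exact averaging convention may differ slightly in detail):

        import numpy as np

        def average_precision(ranked_items, true_neighbors):
            # Average the precision values measured at each position
            # where a true nearest neighbor appears in the ranking.
            true_set = set(true_neighbors)
            hits, precisions = 0, []
            for position, item in enumerate(ranked_items, start=1):
                if item in true_set:
                    hits += 1
                    precisions.append(hits / position)  # precision at this rank
            return np.mean(precisions) if precisions else 0.0

        def mean_average_precision(all_ranked, all_true):
            # MAP: the mean of the average precisions over all queries.
            return np.mean([average_precision(r, t) for r, t in zip(all_ranked, all_true)])

        # Toy example with two queries (hypothetical item IDs)
        print(mean_average_precision(all_ranked=[[3, 7, 1, 9], [4, 2, 8, 5]],
                                     all_true=[[3, 1], [2, 6]]))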

    8. Euclidean distance

      The straight-line distance between two points in Euclidean space. Euclidean space generalizes the familiar two- and three-dimensional x,y,z space of ordinary geometry to any number of dimensions, such as the d-dimensional feature vectors used here.
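
      For instance, a minimal illustration in Python (NumPy) of the Euclidean distance between two feature vectors:

        import numpy as np

        u = np.array([1.0, 2.0, 3.0])
        v = np.array([4.0, 6.0, 3.0])

        # square root of the sum of squared coordinate differences
        dist = np.sqrt(np.sum((u - v) ** 2))  # equivalent to np.linalg.norm(u - v)
        print(dist)  # 5.0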

    9. We compared the two algorithms for finding nearest neighbors in three benchmark data sets: SIFT (d = 128), GLOVE (d = 300), and MNIST (d = 784)

      SIFT, GLOVE, and MNIST are three well-known data sets that researchers and computer scientists often use to test similarity search algorithms.

      MNIST is the Modified National Institute of Standards and Technology data set. It consists of images of handwritten digits, and each row of the matrix corresponds to one image.

      SIFT stands for Scale-Invariant Feature Transform, a method for extracting distinctive local features from images (e.g. the corners of a door frame). The SIFT data set consists of such feature vectors computed from reference images.

      GLOVE, or Global Vectors for Word Representation, consists of word vectors. It is useful for measuring the linguistic or semantic similarity between two words.

    10. empirically

      By means of testing or observation rather than pure theory.

    11. query

      A question or search. A query image is the image "in question"—in other words, the image you want to find similar matches to.

    12. Each KC receives and sums the firing rates from about six randomly selected PNs

      Caron et al. developed a special tracing technique that allowed them to track the connections between KCs and PNs using a glowing, light-activated protein.

      By tracing these connections, they found that KCs combine input information from a random combination of PNs. That is, they found no specific pattern or organization that relates the type of odor to the PNs that are activated. In addition, they didn't identify any preferred pathways between groups of PNs and individual KCs.

      To learn more about the glowing, light-activated protein used in this research, check out this annotated resource from Science in the Classroom.

    13. binary random connection matrix

      A connection matrix is a table of rows and columns in which each row and each column corresponds to a node in a network (here, the KCs and PNs). Each entry is 1 or 0 depending on whether the row's node is connected to the column's node (1) or not (0).

      It is random because the connections (representing connections between PNs and KCs) are made at random, and it is binary because it uses only ones and zeros.
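
      As a rough sketch (not the authors' code; the sizes are illustrative), such a sparse binary random connection matrix between d PNs and m KCs could be built like this in Python, with each KC sampling a handful of PNs at random:

        import numpy as np

        rng = np.random.default_rng(0)
        d, m, samples_per_kc = 50, 2000, 6  # assumed numbers of PNs, KCs, and samples

        # M[i, j] = 1 if KC i receives input from PN j, and 0 otherwise
        M = np.zeros((m, d), dtype=int)
        for i in range(m):
            M[i, rng.choice(d, size=samples_per_kc, replace=False)] = 1

        # Each KC then sums the firing rates of the ~6 PNs it samples
        kc_input = M @ rng.random(d)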

    14. Kenyon cells

      One of the types of cells found in the "mushroom bodies" of the olfactory system. Mushroom bodies are important for olfactory learning and memory and—you guessed it—look like mushrooms.

    15. For the PNs, this concentration dependence is removed (7, 8); that is, the distribution of firing rates across the 50 PN types is exponential, with close to the same mean for all odors and all odor concentrations

      In 2016, Stevens determined that PNs in the olfactory circuit use what's called a "maximum entropy code."

      A maximum entropy distribution means that the olfactory circuit can encode the largest possible number of stimuli using a fixed number of neurons. In practice, this means that the distribution of firing rates across the 50 PN types looks statistically the same for every odor and every odor concentration, even though individual PNs respond differently to different odors.

      In other words, if you took 10 PNs and 10 odors, recorded the firing rate of each of the PNs in response to each of the odors, and created a histogram from the data, that histogram would always look the same—regardless of which 10 neurons and which 10 odors you chose.

    16. only a small fraction of the neurons that receive odor information respond to each odor

      Lin et al. discovered the neural mechanism that explains why so few olfactory neurons make up each odor "tag."

      When the cells that receive odor information, called Kenyon cells (KCs), become active, the anterior paired lateral neuron responds by suppressing the activity of the KCs. The more active the KCs become, the harder the anterior paired lateral neuron works to block their activity. The few KCs that are left active after this process form the odor tag.

    17. stimuli

      Any thing or event that causes a neuron or set of neurons to become active.

      When a nerve cell is activated, it triggers an action potential, which is a temporary change in the electric charge of the cell relative to its surroundings. This depolarization causes nearby locations to similarly depolarize (think about a line of dominoes falling down) and an electrical signal travels through the brain.

      On a graph, this action potential looks like a brief spike, which is why scientists sometimes refer to this process of neurons releasing action potentials as "spiking."

    18. olfactory

      Relating to the nose/sense of smell.

  2. Oct 2018
    1. 9. S. J. Caron, V. Ruta, L. F. Abbott, R. Axel, Nature 497, 113–117 (2013).

      Caron et al. investigate the specific process by which Kenyon cells integrate information sent to them from glomeruli.

      Their results suggest that the input to KCs is random; that is, there is no discernible organization in how the glomeruli project information to individual KCs.

    2. 17. Z. Allen-Zhu, R. Gelashvili, S. Micali, N. Shavit, Proc. Natl. Acad. Sci. U.S.A. 111, 16872–16876 (2014).

      Allen-Zhu et al. consider whether the physical properties of neural tissue are actually conducive to performing Johnson-Lindenstrauss (JL) calculations in the brain. For example, the fact that neurons are either inhibitory or excitatory has certain implications for the signs (+ or -) in a JL matrix.

      They determine that it is indeed possible to construct JL matrices that accurately reflect the physical properties of the brain.

    3. We then found the top 2% of predicted nearest neighbors in m-dimensional hash space

      For the fly, the data is organized into a higher dimensional space. For LSH, it is organized in a lower dimensional space. But if the algorithms work effectively, the data should still be arranged so that similar features are near one another.

      Instead of feature vectors, the algorithms arrange items in hashes or "tags" (just like the fly brain uses a tag to represent an odor, the algorithms use a tag to represent a specific image or word). Finding the hashes that are closest to each other in this new m-dimensional space should reveal the images/words that are most similar to each other (again, if the algorithm works correctly).

      These nearby hashes are called "predicted nearest neighbors" because they predict which items in the data set are the most similar.
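
      A minimal sketch of this step in Python (NumPy), assuming Euclidean distance between hash vectors; the array names, sizes, and the 2% cutoff below are illustrative:

        import numpy as np

        def predicted_nearest_neighbors(hashes, query_idx, frac=0.02):
            # Indices of the top 2% of items whose hashes lie closest to the query's hash.
            dists = np.linalg.norm(hashes - hashes[query_idx], axis=1)
            dists[query_idx] = np.inf  # exclude the query itself
            k = max(1, int(frac * len(hashes)))
            return np.argsort(dists)[:k]

        hashes = np.random.rand(10000, 16)  # 10,000 items with hash length 16
        print(predicted_nearest_neighbors(hashes, query_idx=0)[:10])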

    4. This proves that the fly’s circuit represents a previously unknown LSH family.

      One of the defining properties of an LSH algorithm is that it maps input points to new representations (hashes) while approximately preserving the distances between points. The authors proved that the fly algorithm also has this property but takes a different approach from other known LSH schemes. The difference, in part, is that it expands the dimension rather than contracting it.

      Therefore, the authors conclude that the fly's approach represents a completely new category of LSH algorithms.

    5. we show analytically that sparse, binary random projections of the type in the fly olfactory circuit generate tags that preserve the neighborhood structure of input points.

      In a mathematical proof, the authors took a matrix of input odors and multiplied it by a sparse binary random projection matrix to create a new matrix of Kenyon cell values. In this new matrix, the relative positions of the input odors were approximately the same as they were in the original matrix.

      This shows that the fly algorithm preserves the distance between input points when sending information from PNs to KCs.
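
      A toy numerical check of this idea (an empirical sketch in Python, not the authors' analytical proof; the dimensions and the 10% sampling rate are assumptions):

        import numpy as np

        rng = np.random.default_rng(1)
        d, m = 50, 1000                               # assumed PN and KC counts
        M = (rng.random((m, d)) < 0.1).astype(float)  # sparse binary random projection

        x = rng.random(d)
        y = x + 0.05 * rng.standard_normal(d)         # a point near x
        z = rng.random(d)                             # an unrelated point

        before = (np.linalg.norm(x - y), np.linalg.norm(x - z))
        after = (np.linalg.norm(M @ x - M @ y), np.linalg.norm(M @ x - M @ z))
        print(before, after)  # y stays the nearer neighbor of x after projection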

    6. a mean that depends on the concentration of the odor

      Hallem and Carlson conducted an analysis to determine how the fly's olfactory neurons would change their behavior in response to different types and concentrations of odor.

      They found that the average strength of the receptors' response (i.e., the neurons' firing rates) depended on the concentration of the odor. High concentrations elicited strong responses (high firing rates) from multiple receptor neurons, while low concentrations elicited lower firing rates.

    7. exponential

      An exponential distribution is a type of probability distribution. In general, a probability distribution shows how likely different outcomes are; here, for example, how likely it is that an event (such as a neuron firing) occurs within a given time interval. Specifically, an exponential distribution arises when events occur at a constant average rate, so that over a short interval the probability of the event is roughly proportional to the length of the interval.

      Note: In the accompanying image, the x-axis is time, the y-axis is the probability density, and 1/λ is the mean (average) of the distribution. Each lambda (λ) curve represents an exponential distribution with a different mean.

    8. glomeruli

      Spherical structures located near the surface of the antennal lobe in the fly's brain.

    9. sparse

      Thinly populated. Here, it means that each tag consists of only a small percentage of the neurons present in the olfactory circuit. That is, only a small percentage of neurons fire action potentials in response to the odor.

    10. The fly olfactory circuit generates a “tag” for each odor, which is a set of neurons that fire when that odor is presented

      In 2015, Stevens identified a three-step code that the fly olfactory circuit uses to identify and respond to specific odors.

      1. Receptor neurons send odor information to specialized structures called glomeruli, which contain projection neurons.
      2. Projection neurons send the information to Kenyon cells in the mushroom body.
      3. These Kenyon cells form the odor label, or tag, and then pass the information along to another stage of neurons that then direct the fly's behavior.
    11. approximate similarity (or nearest-neighbors) search

      A type of algorithm that takes a point in a data set and then seeks to find other points in the set that are most similar to it.

    12. fruit fly

      Fruit flies have a long history of use in scientific (and especially medical) research. But why are they so popular? As insects, how do they contribute to our understanding of humans and other mammals?

      Fruit flies can perform many of the same essential neural functions (albeit in a simpler form) as mammalian brains, and they are very tractable to study: there are genetic tools to measure and manipulate their neural circuitry, and their brains are orders of magnitude smaller than those of mice or other mammals. The hope is that understanding fruit flies will give researchers a basis or framework for studying more sophisticated brains.

      Read more in The Guardian: https://www.theguardian.com/science/2016/sep/25/in-praise-of-the-humble-fruit-fly-genetics-research

  3. Sep 2018
    1. Johnson-Lindenstrauss

      This is a mathematical statement showing that you can take a set of points in a high-dimensional space and map them into a lower-dimensional space while (very nearly) maintaining the distances between all pairs of points.

      You could think of this like taking all the points on a sculpture and projecting them onto a piece of paper while keeping every point in its proper position relative to all the other points on the sculpture.
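
      Stated a little more precisely (this is the standard formulation of the lemma, not a quotation from the paper):

        \text{For any } 0 < \varepsilon < 1 \text{ and any } n \text{ points } x_1, \dots, x_n \in \mathbb{R}^d,
        \text{there is a map } f : \mathbb{R}^d \to \mathbb{R}^m \text{ with } m = O(\varepsilon^{-2} \log n) \text{ such that}
        (1 - \varepsilon)\,\lVert x_i - x_j \rVert^2 \le \lVert f(x_i) - f(x_j) \rVert^2 \le (1 + \varepsilon)\,\lVert x_i - x_j \rVert^2
        \text{for every pair of points } x_i, x_j.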

    2. we used 20k random projections for the fly to equate the number of mathematical operations used by the fly and LSH

      As stated earlier in the article, the authors could only fairly compare the LSH and fly algorithms if each used the same number of mathematical operations.

      They determined that if they used 20k random projections in the fly algorithm (where k is the length of the output tag, or hash) then the total number of operations for the fly and LSH algorithms would be equal.

      For more detail, see the second paragraph under "Materials and Methods" in the Supplementary Materials document.
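
      A rough sketch of the bookkeeping (assuming each sparse fly projection samples about 10% of the d input dimensions; the exact accounting is given in the Supplementary Materials):

        dense LSH:    k hash dimensions × (d multiplications + d additions)        ≈ 2dk operations
        fly (sparse): 20k hash dimensions × (≈0.1·d additions, no multiplications) ≈ 2dk operations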

    3. This led us to conjecture that the fly’s circuit produces tags that are locality-sensitive

      AP Science Practices, Practice 6: The student can work with scientific explanations and theories.

      Students should be able to articulate the authors' reasoning that led them to develop this theory. Looking at the results of the paper, students may also list specific conclusions that support or refute this theory.

      https://apcentral.collegeboard.org/courses/resources/science-practices

    4. inhibitory neuron

      These cells send signals to surrounding neurons that make those cells less likely to fire.

    5. feedforward

      This describes an element in a system that passes information from a source (in this case, from odorant receptor neurons) forward to another element in the system (in this case, projection neurons).

    6. The tag for an odor is computed by a three-step procedure

      AP Science Practices, Practice 1: The student can use representations and models to communicate scientific phenomena and solve scientific problems.

      Looking at the model in Figure 1A, students should be able to provide a verbal explanation of what the diagram is showing and what each shape and symbol represents.

      https://apcentral.collegeboard.org/courses/resources/science-practices

    7. circuit

      A group of neurons connected by synapses (structures that allow electrical signals to pass between neurons) that carry out a specific function in the brain.

    8. machine learning

      A type of computer algorithm which relies on a large amount of input data to make a future decision about a new data point.

      How do our brains use machine learning algorithms to make decisions? This article explores how machine learning affects everything from our belief systems to our online shopping habits.

      Read more in Forbes magazine: https://www.forbes.com/sites/annapowers/2017/12/31/is-our-mind-a-machine-learning-algorithm/

    9. Similarity search—for example, identifying similar images in a database or similar documents on the web—is a fundamental computing problem faced by large-scale information retrieval systems.

      A Framework for K-12 Science Education: Disciplinary Core Ideas, Practice ETS1.A: Defining and Delimiting an Engineering Problem.

      After reading and engaging with the article, students should be able to define:

      1. What a similarity search is
      2. What the limitations are with traditional LSH algorithms
      3. How the fly algorithm helps improve similarity searches

      https://www.nap.edu/read/13165/chapter/12#203

    10. may also appear in other brain regions and species

      AP Biology Practices Essential Knowledge 1.B.1: Students should be able to discuss how this statement supports the idea that "Organisms share many conserved core processes and features that evolved and are widely distributed among organisms today."

      http://media.collegeboard.com/digitalServices/pdf/ap/10b_2727_AP_Biology_CF_WEB_110128.pdf

    11. Our goal was to fairly compare two conceptually different approaches for the nearest-neighbors search problem

      A Framework for K-12 Science Education: Science and Engineering Practices, Practice 7: Students should be able to evaluate whether the authors of this study accomplished their goal, citing specific evidence from the paper to support their argument.

      https://www.nap.edu/read/13165/chapter/7

    12. Imagine that you are provided an image of an elephant and seek to find the 100 images—out of the billions of images on the web—that look most similar to your elephant image.

      Common Core State Standards English Language Arts-Literacy, RST 11-12.6: Students should be able to explain why the authors provided this example in the paper, specifically addressing how it adds to the reader's understanding of the research.

      http://www.corestandards.org/ELA-Literacy/RST/11-12/

    13. we selected 1000 random query inputs from the 10,000 and compared true versus predicted nearest neighbors

      From the 10,000 vectors, the authors randomly chose 1,000 of them to be query inputs (images or words that they would test against the rest of the feature vectors to find which ones represent similar images or words).

    14. We used a subset of each data set with 10,000 inputs each, in which each input was represented as a feature vector in d-dimensional space

      They chose 10,000 vectors of length d from each data set. Each vector represented an image (SIFT, MNIST) or a word (GLOVE).

    15. computed the overlap between the ranked lists of true and predicted nearest neighbors

      This is a method of measuring how well an algorithm performed. The more overlap between the two lists, the more similar they are. If they are exactly the same, this means that the algorithm performed perfectly.

      For example, it might mean that the algorithm found the closest match to every query image, or that it correctly identified all the features that match part of an image.
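
      A minimal sketch of the overlap calculation in Python (the index lists are made up):

        # Indices of the true and predicted nearest neighbors for one query
        true_nn = [12, 57, 3, 88, 41]
        predicted_nn = [57, 12, 99, 41, 60]

        overlap = len(set(true_nn) & set(predicted_nn)) / len(true_nn)
        print(overlap)  # 0.6 -> 60% of the true neighbors were recovered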

    16. determined on the basis of Euclidean distance between feature vectors

      Each feature vector contains a row of values representing features of an image or word (depending on the data set). Because the data is arranged so that similar features are near each other, measuring the straight-line (Euclidean) distance between two vectors allows the authors to determine the similarity between them.

      That is, the closer the vectors, the more similar their features. The vectors closest to one another are the "true nearest neighbors," or the images/words in the data set most similar to one another.

    17. synergy

      An interaction between two elements that creates a combined effect greater than the sum of their parts.

    18. Thus, locality-sensitive hashing may be a general principle of computation used in the brain

      The authors note that the specific process that the fly olfactory circuit uses to identify odors is similar to what other researchers have seen in other animals and other parts of the brain (see Table 1).

      From this observation, they conclude that the locality-sensitive hashing approach that they've identified in the fly olfactory circuit may not be limited to flies or even to odor recognition, but could be a standard processing technique that the brain uses for many different purposes.

    19. With further expansion of the dimensionality (from 20k to 10d KCs, closer to the actual fly’s circuitry), we obtained further gains relative to LSH in identifying nearest neighbors across all data sets and hash lengths

      For all three data sets (SIFT, MNIST, and GLOVE), the authors improved the fly algorithm's performance by increasing the number of Kenyon cells used in the calculation. That is, using more KCs made it easier for the fly algorithm to correctly identify the nearest neighbors (most similar images or words) in the data sets, widening its advantage over LSH.

    20. sparsifying the tag using WTA resulted in better performance than using random tag selection

      The authors looked at how the algorithm performed when using two different methods to sparsify the tag:

      1. The tag is created from a random selection of Kenyon cells
      2. The tag is created from the Kenyon cells with the highest firing rates (this is the "winner takes all" or WTA approach)

      The result was that the WTA approach performed better than random selection. Note that the authors measured performance using the mean average precision (see Fig. 2B). A rough sketch of the two approaches appears below.
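
      In Python, with made-up KC counts, tag length, and activity values:

        import numpy as np

        rng = np.random.default_rng(2)
        kc_activity = rng.random(2000)  # hypothetical activity of 2,000 KCs
        tag_length = 32

        # Winner-take-all: keep the indices of the highest-firing KCs
        wta_tag = np.argsort(kc_activity)[-tag_length:]

        # Random selection: keep a random subset of KCs, ignoring activity
        random_tag = rng.choice(len(kc_activity), size=tag_length, replace=False)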

    21. 15. S. Dasgupta, A. Gupta, Random Structures Algorithms 22, 60–65 (2003).

      Dasgupta and Gupta give a simplified proof of the Johnson-Lindenstrauss lemma. The proof shows that a set of points in high-dimensional space can be mapped into a lower-dimensional space such that the distance between any two points changes only by a factor between (1 - ε) and (1 + ε).

      This result forms the foundation of many LSH algorithms.

    22. 10. A. Andoni, P. Indyk, Commun. ACM 51, 117 (2008).

      Andoni and Indyk provide an overview of efficient algorithms for nearest neighbor search problems.

      They focus specifically on locality-sensitive hashing algorithms, highlighting one specific option within the LSH family that provides near-optimal performance on nearest neighbor search problems.

    23. 8. S. R. Olsen, V. Bhandawat, R. I. Wilson, Neuron 66, 287–299 (2010).

      Olsen, Bhandawat, and Wilson describe one specific type of computation that occurs in olfactory processing in flies.

      They show that this computation, known as divisive normalization, helps to equalize responses to different input stimuli. This same computation is also known to serve a similar purpose in many parts of the visual system.

    24. 2. D. Owald, S. Waddell, Curr. Opin. Neurobiol. 35, 178–184 (2015).

      Owald and Waddell explore how memories of odors manifest in the fly's olfactory circuit and drive fly behavior.

      Specifically, they describe how dopamine plays a key role in this process. During learning, dopaminergic neurons reactivate Kenyon cells associated with specific odor tags. In this way, the fly recalls previously-learned odors.

    25. 1. C. F. Stevens, Proc. Natl. Acad. Sci. U.S.A. 112, 9460–9465 (2015).

      Stevens outlines the three-layer architecture that makes up the fly's olfactory circuit.

      He also presents the idea of a unique odor label, or "tag," that is composed of a small set of neurons and helps the fly identify distinct odors.

    26. the first step in the circuit essentially “centers the mean”—a standard preprocessing step in many computational pipelines

      Olsen, Bhandawat, and Wilson showed that the fly olfactory circuit uses a specific type of processing known as divisive normalization. This process ensures that the strength of a neuron's output stays within a certain range.

      This is important because a complex stimulus (e.g. one consisting of multiple odors) might elicit several different responses from a single neuron based on all the different aspects of the stimulus (type, concentration, molecular make-up, etc.). But thanks to divisive normalization, a neuron's response will look more like an average of all its responses rather than the sum of them.

    27. all but the highest-firing 5% of KCs are silenced

      Turner, Bazhenov, and Laurent quantified the sparsity of KCs by recording the spiking activity of the cells in response to input from the PNs. They discovered that at least three factors explain why such a small number (~5%) become active:

      1. The inputs from PNs last a very short time, which doesn't give KCs much time to build up their activity.
      2. Only a few PNs connect to each KC, so overall input levels are low.
      3. KC firing thresholds are high, meaning each KC must receive very strong input in order to reach the threshold it needs to fire.
    28. This is called the nearest-neighbors search problem, which is of fundamental importance in information retrieval, data compression, and machine learning

      Andoni and Indyk explored how LSH algorithms can be applied to nearest-neighbor search problems. They showed that nearest-neighbor search is useful for any task where you want to find similar objects, whether you're trying to retrieve information, recognize a pattern, or analyze data.

    29. its many variants (16–18) provide strong theoretical bounds on how well locality is preserved when embedding data from d into m dimensions by using various types of random projections.

      Achlioptas explored two specific applications of the Johnson-Lindenstrauss lemma that are particularly relevant to computer database applications. In doing so, he discovered a method of projecting points from a higher to lower dimensional space that was simpler and faster than other known methods. In fact, he showed that it was possible to perform this faster embedding process without affecting the quality of the results.

  4. Aug 2018
    1. This tag is critical for learning behavioral responses to different odors

      Owald and Waddell discovered that the fly olfactory circuit is able to recall previously-learned odors through the help of specialized dopamine neurons. After a fly smells an odor, these dopamine neurons trigger changes in parts of the olfactory circuit that cause the specific neurons associated with the odor (aka the "tag") to fire.

      Reactivating the tag causes the fly to remember the odor, as well as the values/meanings/context associated with it. The fly can then exhibit the appropriate behavior based on its prior learning (e.g. avoidance if the odor is associated with danger or approach if the odor is associated with a reward).

    2. nearest neighbors

      A "nearest neighbors" search takes a specific item and looks for other nearby items that are similar to it. This is a very common problem in everyday life (for example, a business owner who wants to find other coffee shops near hers, or a dating profile that wants to find compatible suitors nearby). But how exactly do nearest neighbors searches work? And how difficult are they to implement?

      Read more in Quanta Magazine: https://www.quantamagazine.org/universal-method-to-sort-complex-information-found-20180813/

    3. hashing

      The process of taking a data set of any size and mapping (or organizing) it into a structure of a fixed size, such as a table. This makes it easy to find the data later if you need it.

      A good example would be books in a library. By assigning each book a call number associated with its location, you can easily look up the book in a database and see where you should go in the library to find it.
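
      As a tiny illustration of the idea in Python (a dictionary is itself a hash table; the call numbers below are made up for the example):

        # Map each book title to a "call number" so it can be looked up directly
        catalog = {
            "On the Origin of Species": "QH365.O2",
            "The Feynman Lectures on Physics": "QC23.F47",
        }
        print(catalog["On the Origin of Species"])  # -> QH365.O2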

    4. discretizing

      To represent by breaking up into individual, separate quantities.

    5. motif

      A distinctive, recurrent pattern; here, a recurring pattern of neural wiring and computation.

    6. Gaussian random projections

      This is a matrix whose entries are generated from a Gaussian distribution. In a Gaussian distribution, also called a normal distribution, the mean value has the highest probability of occurring, and values further away from the mean in both directions have a smaller chance of occurring. An example is a bell curve of grades, where most students get an average C grade and only a few students get either Fs or As.
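
      A minimal sketch in Python (NumPy) of a Gaussian random projection, followed by one common way of turning the result into a binary hash (keeping the sign); the dimensions are illustrative, and the exact discretization used in the paper may differ:

        import numpy as np

        rng = np.random.default_rng(3)
        d, k = 128, 16                       # input dimension and hash length

        R = rng.standard_normal((k, d))      # dense Gaussian random projection matrix
        x = rng.random(d)                    # a feature vector

        hash_bits = (R @ x > 0).astype(int)  # discretize by keeping the sign
        print(hash_bits)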

    7. divisive normalization

      This is a common technique the brain uses in many different sensory systems (vision, hearing, smell, etc.). The calculation incorporates information about the neuron's receptive field (the area in which it is sensitive to a stimulus) as well as a measure of the contrast (difference) between local stimuli.

    8. ported

      Transferred.

    9. clustering

      In statistics, this is the process of organizing a set of objects (such as data points) into groups (or clusters) based on similar features.

    10. semantic

      Related to the meaning of words in a language.

    11. lemma

      Known in mathematics as a "helping theorem," this is a proposition that forms one part of a larger or more general theorem.

    12. distance metric

      A mathematical expression that defines the distance between any two elements in a set. Computing the distance between two feature vectors (each representing an image) lets you determine how similar those two images are. That is, the smaller the distance between the vectors, the more similar the images are.

    13. vector

      A row of numbers.

    14. inhibitory

      Blocking an action or making it less likely to occur.

    15. locality-sensitive

      A type of hashing that maps a large data set into a table with fewer entries than pieces of data. To do this, similar pieces of data are assigned to the same (or nearby) entries in the table, so that similar items can be found quickly.

    16. algorithm

      A set of instructions that tells a computer how to carry out a task or solve a problem.