17 Matching Annotations
  1. Jun 2023
    1. Recent work in computer vision has shown that common image datasets contain a non-trivial amount of near-duplicate images. For instance, CIFAR-10 has 3.3% overlap between train and test images (Barz & Denzler, 2019). This results in an over-reporting of the generalization performance of machine learning systems.

      CIFAR-10 performance results are overestimates since some of the training data is essentially in the test set.
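
      A minimal sketch of how such train/test near-duplicates might be detected. The average-hash approach, the 8×8 hash size, and the Hamming-distance threshold are illustrative assumptions, not the method used by Barz & Denzler:

      ```python
      import numpy as np
      from PIL import Image

      def average_hash(img: Image.Image, hash_size: int = 8) -> np.ndarray:
          """Tiny perceptual hash: grayscale, downscale, threshold at the mean."""
          small = img.convert("L").resize((hash_size, hash_size), Image.BILINEAR)
          pixels = np.asarray(small, dtype=np.float32)
          return (pixels > pixels.mean()).flatten()

      def is_near_duplicate(a: Image.Image, b: Image.Image, max_hamming: int = 4) -> bool:
          """Treat two images as near-duplicates if their hashes differ in few bits."""
          return int(np.sum(average_hash(a) != average_hash(b))) <= max_hamming

      # Rough overlap estimate: fraction of test images that near-duplicate any training image.
      # overlap = np.mean([any(is_near_duplicate(t, tr) for tr in train_images) for t in test_images])
      ```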

  2. Dec 2021
  3. Sep 2021
  4. Jul 2021
  5. Mar 2021
  6. Jan 2019
    1. Surface/Interior Depth-Cueing Depth cues can contribute to the three-dimensional quality of projection images by giving perspective to projected structures. The depth-cueing parameters determine whether projected points originating near the viewer appear brighter, while points further away are dimmed linearly with distance. The trade-off for this increased realism is that data points shown in a depth-cued image no longer possess accurate densitometric values. Two kinds of depth-cueing are available: Surface Depth-Cueing and Interior Depth-Cueing. Surface Depth-Cueing works only on nearest-point projections and the nearest-point component of other projections with opacity turned on. Interior Depth-Cueing works only on brightest-point projections. For both kinds, depth-cueing is turned off when set to zero (i.e. 100% of intensity in back to 100% of intensity in front) and is on when set at 0 < n ≤ 100 (i.e. (100 − n)% of intensity in back to 100% of intensity in front). Having independent depth-cueing for surface (nearest-point) and interior (brightest-point) allows for more visualization possibilities. The linear dimming, and how it combines with opacity and the transparency bounds, is illustrated in the sketch after this list.
    2. Opacity Can be used to reveal hidden spatial relationships, especially on overlapping objects of different colors and dimensions. The (surface) Opacity parameter permits the display of weighted combinations of nearest-point projection with either of the other two methods, often giving the observer the ability to view inner structures through translucent outer surfaces. To enable this feature, set Opacity to a value greater than zero and select either Mean Value or Brightest Point projection.
    3. Interpolate Check Interpolate to generate a temporary z-scaled stack that is used to generate the projections. Z-scaling eliminates the gaps seen in projections of volumes with slice spacing greater than 1.0 pixels. This option is equivalent to using the Scale plugin from the TransformJ package to scale the stack in the z-dimension by the slice spacing (in pixels). This checkbox is ignored if the slice spacing is less than or equal to 1.0 pixels.
    4. Lower/Upper Transparency Bound Determine the transparency of structures in the volume. Projection calculations disregard points having values less than the lower threshold or greater than the upper threshold. Setting these thresholds permits making background points (those not belonging to any structure) invisible. By setting appropriate thresholds, you can strip away layers having reasonably uniform and unique intensity values and highlight (or make invisible) inner structures. Note that you can also use Image▷Adjust▷Threshold… [T]↑ to set the transparency bounds.
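
      A toy numpy sketch of how the transparency bounds, depth cueing (n), and opacity described above might interact in a brightest-point projection along z. The function and parameter names are assumptions for illustration, and the logic simplifies ImageJ's actual behaviour (which, for example, cues surface and interior independently):

      ```python
      import numpy as np

      def project_brightest_point(stack: np.ndarray,
                                  lower: float = 0.0, upper: float = 255.0,
                                  depth_cue_pct: float = 0.0,
                                  opacity: float = 0.0) -> np.ndarray:
          """Toy brightest-point projection of a (z, y, x) stack along z.

          lower/upper   -- transparency bounds; voxels outside them are ignored
          depth_cue_pct -- n in [0, 100]; back slices are dimmed to (100 - n)%
          opacity       -- weight of the nearest-point (surface) component
          Slice 0 is assumed to be nearest the viewer.
          """
          z = stack.shape[0]
          # Transparency bounds: voxels outside [lower, upper] become invisible.
          visible = (stack >= lower) & (stack <= upper)
          cued = np.where(visible, stack, 0).astype(np.float32)

          # Depth cueing: scale intensity linearly from 100% (front) to (100 - n)% (back).
          depth = np.linspace(0.0, 1.0, z).reshape(z, 1, 1)
          cued *= 1.0 - (depth_cue_pct / 100.0) * depth

          interior = cued.max(axis=0)                      # brightest point along z

          # Nearest-point (surface) component: first visible voxel along z, front to back.
          first_idx = np.argmax(visible, axis=0)           # 0 where nothing is visible
          surface = np.take_along_axis(cued, first_idx[None], axis=0)[0]
          surface = np.where(visible.any(axis=0), surface, 0.0)

          # Opacity blends the translucent surface over the interior projection.
          return opacity * surface + (1.0 - opacity) * interior

      # e.g. proj = project_brightest_point(stack, lower=30, upper=255, depth_cue_pct=50, opacity=0.25)
      ```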
    1. Orchestrating the execution of many command line tools is a task for Galaxy, while an analysis of life science data with subsequent statistical analysis and visualization is best carried out in KNIME or Orange. Orange with its “ad-hoc” execution of nodes caters to scientists doing quick analyses on small amounts of data, while KNIME is built from the ground up for large tables and images. Noteworthy is that none of the mentioned tools provide image processing capabilities as extensive as those of the KNIME Image Processing plugin (KNIP).
    2. In conclusion, the KNIME Image Processing extensions not only enable scientists to easily mix and match image processing algorithms with tools from other domains (e.g. machine learning) and scripting languages (e.g. R or Python), or to perform cross-domain analyses on heterogeneous data types (e.g. molecules or sequences), but also open the door to the explorative design of bioimage analysis workflows and their application to hundreds of thousands of images.
    3. To further foster this “write once, run anywhere” framework, several independent projects collaborated closely to create ImageJ-Ops, an extensible Java framework for image processing algorithms. ImageJ-Ops allows image processing algorithms to be used within a wide range of scientific applications, particularly KNIME and ImageJ; consequently, users need not choose between those applications but can take advantage of both worlds seamlessly.
    4. Most notably, integrating with ImageJ2 and FIJI allows scientists to easily turn ImageJ2 plugins into KNIME nodes without having to write a single line of code.
  7. Oct 2016
    1. Well-known examples of spatial operators are the mean filter, which computes the arithmetic mean of the pixels inside the "window" and assigns that value, and the median filter, which instead computes the statistical median.

      Some of the main spatial operators used in image processing.
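
      A minimal sketch of the two filters using scipy.ndimage; the 3×3 window and the random stand-in image are illustrative choices:

      ```python
      import numpy as np
      from scipy import ndimage

      img = np.random.randint(0, 256, size=(128, 128)).astype(np.float32)  # stand-in image

      # Mean filter: each pixel becomes the arithmetic mean of its 3x3 neighbourhood.
      mean_filtered = ndimage.uniform_filter(img, size=3)

      # Median filter: each pixel becomes the statistical median of its 3x3 neighbourhood,
      # which suppresses salt-and-pepper noise while preserving edges better than the mean.
      median_filtered = ndimage.median_filter(img, size=3)
      ```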

  8. Jun 2016
    1. A case in point is the obliterated text between syððan and þ on fol. 179r10. Any attempt at restoration is complicated by the fact that some of the ink traces, as conclusively shown by an overlay in Electronic Beowulf 4.0, come from an offset from the facing fol. 178v. Digital technology allows us to subtract these false leads and arrive at a more plausible restoration

      Great use of image processing to arrive at plausible conjectural readings.
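
      A rough sketch of the kind of subtraction described, assuming the facing-page image has already been mirrored and registered onto the recto's coordinate frame; the function and variable names are hypothetical, not the Electronic Beowulf workflow:

      ```python
      import numpy as np

      def subtract_offset(recto: np.ndarray, offset_estimate: np.ndarray,
                          strength: float = 0.5) -> np.ndarray:
          """Suppress offset ink on an 8-bit grayscale recto image.

          offset_estimate -- facing-page image already mirrored and registered
                             onto the recto (assumed to be given)
          strength        -- how much of the estimated offset ink to remove
          """
          # Work in "ink density" space (dark ink = high value) so that subtracting
          # the facing page removes its transferred ink instead of darkening the page.
          ink_recto = 255.0 - recto.astype(np.float32)
          ink_offset = 255.0 - offset_estimate.astype(np.float32)
          cleaned_ink = np.clip(ink_recto - strength * ink_offset, 0, 255)
          return (255.0 - cleaned_ink).astype(np.uint8)
      ```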