- Nov 2024
-
www.biorxiv.org
-
The X,Y-resolution (effective pixel resolution) was similar between the two printers, at 30 and 40 μm for the 405- and 385-nm printers, respectively. In this resin, the crosslinking reaction was more efficient with the 385 nm light source, enabling reduced light dosage (shorter times and lower intensities; Table S1), which assisted in diminishing bleed-through light allowing uncured resin to drain more easily from channels. Consistent with this, channels were printable at sizes ~0.2 mm smaller with the 385 nm printer (Fig. 2C) versus the 405 nm source (Fig. 2B).
Thank you for including this information. I have tried to make similar microfluidic channels of different dimensions, but did not consider the effect that light-source wavelength would have on different resin types.
-
- Sep 2024
-
www.medrxiv.org
-
MER2-1220-32U3M, Daheng Imaging
It was great reading about an open-source imaging platform being used as a low-cost method of disease detection. I'm curious how this system works with other cameras and what trade-offs there are between this camera and others you may have tried. I've been using FLIR cameras for open-source affordable imaging projects, but I am not sure that is the best option.
-
-
www.biorxiv.org
-
The results prove that 1-minute mechanical maceration using such a small handheld device can perform almost equivalently in sample lysis to manual grinding using a mortar and pestle.
Really great reading about this device! Has anyone tried using it for nucleus extractions of algae? We have used a modified version of this protocol where we cryo-grind Chlamydomonas with a mortar and pestle, but found it to be laborious and inconsistent. I'd imagine you'd need to make significant modifications to cool the system, though that might be beneficial anyway given the heat generated by the motor.
-
- Aug 2024
-
www.biorxiv.org
-
The low-cost imaging platforms presented here provide an opportunity for labs to introduce phenotyping equipment into their research toolkit, and thus increase the efficiency, reproducibility, and thoroughness of their measurements.
I really like this approach to plant imaging. I'm curious what you think about a system like this that allows the camera to move between positions to take time-lapse images of even more samples?
-
- Feb 2024
-
www.biorxiv.org
-
Schematic of experimental approach: New Zealand white rabbits are implanted with a chronic 32-channel ECoG grid over visual cortex, and visually-evoked potentials are recorded as high-contrast stimuli are presented using a monitor.
I've really enjoyed reading this paper and learned a lot about optogenetics and ocular implanted devices. I'm curious how you plan to validate the effectiveness of these devices in animals and what kinds of assays would make the most sense.
-
- Jan 2024
-
www.biorxiv.org
-
laboratories.
I really enjoyed reading about this project and am curious about implementing something like this for our lab! It would be great to see a video of it in action. It would also be great to know how quickly COPICK can pick colonies from a plate compared with a person in the lab.
-
At present time (early 2023), there is still a lack of open-source tools on the web to label and create datasets of images using a panoptic segmentation format in a straightforward fashion.
Has this changed in the last year? I'd love to know where advancements are being made.
-
using a reflex camera with a macro objective (Nikon D60 with AF-S Micro NIKKOR 60 mm f/2.8G ED lens)
I'm curious whether there are any drawbacks to training the model using a different camera than the one implemented on the OT-2?
-