I've loaded multiple series of single-stain IHC images into QuPath and then aligned them using the Warpy plug-in in ImageJ/Fiji. In the example below, I show two sample single-stain IHC images and the resulting alignment overlay. My group is very satisfied with the alignment quality, but I've been asked to improve the visualization itself. More specifically, the color selection should provide greater contrast between the overlay and the background; coercing the background color to white or off-white would be ideal. Downstream analyses will involve manual region/patch- and cellular-level annotations, so an optimal visualization of the alignment overlay is important.

Technical specifications of input images: file type = …

Technical specification of output images: image type: single-stain #1 (i.e., moving) = H&E-other; single-stain #2 (i.e., target) = H&E-DAB. I selected these image types so that I can express the overlay using the eosin and DAB channels, respectively, and visually differentiate the individual stained sections (a bit of a cheat). I've tried other color combinations, but these settings seem to be the best thus far. I've also tried exporting images in OME-TIFF format to be color-corrected elsewhere (e.g., a Python environment), but creating those images has been computationally expensive given the image size.

Top-left: single-stain #1 (i.e., moving image); top-right: color-converted single-stain #1 (just for visualization purposes); bottom-left: single-stain #2 (i.e., target image); bottom-right: overlay of the alignment of single-stain #1 and single-stain #2.

Reply: It's doable to some extent to export the view from BigDataViewer; it's clunky, but it works (I can make a small gif if you think that's useful). But I don't really get why BDV is giving a better color rendering than QuPath, since QuPath has more built-in options, like color deconvolution.

If I understand correctly, the problem is the rendering/blending of two RGB images. For brightfield images, blending a pixel of intensity 170 (img1) with another of intensity 190 (img2) will give you a white, saturated displayed pixel (170 + 190 > 255). In QuPath, as in BigDataViewer, there is a scaling that you can apply from the data RGB value to the rendered RGB value, allowing you to avoid saturation when blending with sum. If you set a max displayed value higher than 255, for instance a max of 512 instead of 256, you get: img1 + img2 → 180 displayed (which is no longer saturated). To set a max displayed value higher than 255 in QuPath for RGB images, you just need to double-click on the upper bound of the B&C (Brightness & Contrast) window.
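The saturation arithmetic described in the reply can be sketched in a few lines of NumPy. This is a hypothetical illustration of sum blending with a rescaled display range, not QuPath's or BigDataViewer's actual rendering code:

```python
import numpy as np

# Two "brightfield" pixels from the example in the thread.
img1 = np.array([[170]], dtype=np.uint16)
img2 = np.array([[190]], dtype=np.uint16)

def sum_blend(a, b, max_display=255):
    """Sum-blend two images, then map the data range
    [0, max_display] onto the rendered range [0, 255]."""
    s = a.astype(np.float64) + b.astype(np.float64)
    return np.clip(s / max_display * 255.0, 0, 255).round().astype(np.uint8)

naive = sum_blend(img1, img2, max_display=255)   # 170 + 190 = 360 > 255 -> clipped to white
scaled = sum_blend(img1, img2, max_display=512)  # 360 / 512 * 255 ~ 179 -> no saturation
```

With the default max of 255, the blended pixel clips to 255 (white); raising the max displayed value to 512 maps the same sum to roughly 179, close to the 180 quoted in the reply.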