This document compiles the notes that were created during the QuPath Workshop and Hackathon, which took place from April 17th to April 20th.
= Questions & Answers =
== Is it possible to open confocal images (lsm files)? ==
Yes, but you will need the QuPath BioFormats Extension, which must be installed [[ https://github.com/qupath/qupath-bioformats-extension | according to the instructions on GitHub ]].
== Can there be more than 2 stains? ==
The current color deconvolution implementation can technically work with 3 stains; anything beyond that will be the job of the future pixel classification extension.
To try it with different stains, set the Image Type to **Brightfield (Other)** under the Image tab. You can estimate the stain vectors using ImageJ's Color Deconvolution plugin and enter them manually. Alternatively, you can create a **Rectangular Annotation** and then double-click on the Stain # value you want to set.
Be careful to choose regions that contain a "pure" component.
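Stain vectors can also be set from a script, which is useful when applying the same estimates across a project. A minimal sketch, assuming hypothetical stain names and vector values (in practice these would come from ImageJ's Color Deconvolution plugin or from QuPath's own stain estimation):

```lang=java
// Set custom stain vectors via a script
// The stain names and values below are placeholders, not real estimates
setColorDeconvolutionStains('{"Name" : "My custom stains", ' +
    '"Stain 1" : "Hematoxylin", "Values 1" : "0.651 0.701 0.290", ' +
    '"Stain 2" : "DAB", "Values 2" : "0.269 0.568 0.778", ' +
    '"Background" : "255 255 255"}')
```

An easy way to get the exact JSON string for your image is to set the stains once through the GUI and copy the recorded command from the Workflow tab.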
== Are Z-Stacks supported? ==
Yes, QuPath can read Z-stacks as well as time series. It will not, however, perform 3D detections, and it does not support 3D annotations.
== Are bit depths other than 8-bit supported? ==
Yes, but the support is currently limited, as QuPath was originally meant to work on RGB images only. The capacity to open images other than 8-bit will depend on the image reader; currently it is possible with BioFormats, but not ideal. Expect improved support in the future.
== Can a custom version of ImageJ (e.g. Fiji) be used instead of the one shipped with QuPath? ==
Like Icy, QuPath runs its own instance of ImageJ. You could replace the JAR files with other 'simple' ImageJ flavors, but Fiji, which contains both ImageJ1 and ImageJ2, is currently too complex to be supported.
The **suggested** way to use and extend the ImageJ functionality is to point QuPath to your current ImageJ's plugins folder. Any dependencies would have to be in the `jars` folder of QuPath.
== Can the macro runner run ImageJ macros (.ijm) only or also scripts written in a different language? ==
Currently the Macro Runner only runs ImageJ1 macros.
== Can we set a different cell detection threshold in a specific area? ==
Yes, but only through a script. To make it work, define different classes for your annotations; then, iterating over them, launch a new cell detection for each annotation with parameters chosen based on its class.
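A sketch of what such a script could look like. The class names and threshold values here are hypothetical, and the plugin class name and parameter names should be taken from the command recorded in your own Workflow tab after running Cell Detection once manually, as they can differ between QuPath versions:

```lang=java
// Hypothetical thresholds, one per annotation class
def thresholds = ['Tumor': 0.15, 'Stroma': 0.10]

def hierarchy = getCurrentHierarchy()
for (annotation in getAnnotationObjects()) {
    def className = annotation.getPathClass()?.getName()
    if (!thresholds.containsKey(className))
        continue
    // Select this annotation so the detection runs inside it only
    hierarchy.getSelectionModel().setSelectedObject(annotation)
    // Run cell detection with the class-specific threshold
    // (unspecified parameters fall back to their defaults)
    runPlugin('qupath.imagej.detect.nuclei.WatershedCellDetection',
        '{"threshold": ' + thresholds[className] + '}')
}
```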
== What is the Cell Detection behaviour on fluorescence images? ==
If the image type is set to Fluorescence (from the Image tab), Cell Detection will ask for the channel to use for detecting and offer identical parameters as for brightfield. The only difference will usually be in the value of the threshold, which will typically be higher than the one used for brightfield.
NOTE: The Positive Cell Detection extension will **not** allow you to select the threshold on another channel to classify your detections. You can currently do this through a script though.
=== Example Script ===
```lang=java
// Set the cells with an average nuclear intensity on channel 3 above 40 as positive
setCellIntensityClassifications("Nucleus: Channel 3 mean",40)
// The name of the measurement corresponds to the column name in Measure > Show Detection Measurements
```
== Would it be possible to duplicate an annotation from one z plane to another z plane in the same image? ==
It is rather convoluted, but we have a script for that:
```lang=java
import qupath.lib.objects.PathAnnotationObject
import qupath.lib.roi.PathROIToolsAwt
// Duplicate an object to another Z Slice (Or timepoint)
def newZ = 2
// Get the selected object
def currentObject = getSelectedObject()
// Get the current image's hierarchy
def hierarchy = getCurrentHierarchy()
// We cannot just duplicate the object and set the Z right now, as it is set by the ROI
// The ROI cannot be modified (it is immutable), so
// we must create a new PathAnnotationObject
// Get the object's ROI
def roi = currentObject.getROI()
// Convert it to a shape (the most basic way to define an object)
def shape = PathROIToolsAwt.getShape(roi)
// There is a method to create a ROI from a shape, which allows us to (finally) set the Z (or T)
// Its arguments are: shape, channel (-1 for all), z, t, flatness for curve approximation
def roi2 = PathROIToolsAwt.getShapeROI(shape, -1, newZ, roi.getT(), 0.5)
// We can now make a new annotation
def annotation = new PathAnnotationObject(roi2)
// Add it to the current hierarchy. When we move in Z to the desired slice, we should see the annotation
hierarchy.addPathObject(annotation, false)
```
NOTE: This script was offered to users who had issues with VSI files where channels were being treated as different Z slices. An update to the QuPath BioFormats Extension should remove the need for this in that particular case.
== How many channels are supported by QuPath in Fluorescence? ==
QuPath supports an arbitrary number of channels, but there is a known bug where it is not possible to set the brightness and contrast for the 4th channel of a 4-channel image.
== Does changing brightness/contrast affect measured values? ==
This does not affect the values of the measurements. The only effect adjusting B/C has is on the behavior of the Magic Wand tool. If you feel the tool is being too 'strict' (selecting less than you would expect), try lowering the contrast by setting the min and max values in the B/C dialog farther apart. Accordingly, if the Magic Wand is too 'inclusive', try increasing the contrast to make it more strict.
== Is there a batch mode? ==
Anything that you wish to automate will involve a script. Cell Detection and most extensions are recordable as workflow steps, which can then be made into scripts. These scripts can be run for new images or for the entire project (using **Run > Run for Project...** from the script editor window).
== Can we measure inside the overlay that was sent from imageJ? ==
Annotations have only a few measurements computed automatically; for anything more, one must run **Analyze > Calculate Features > ...**
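This can also be scripted, so measurements are added to all annotations (including those sent over from ImageJ) in one go. A sketch, assuming intensity features are wanted; the exact plugin class name and parameter names are best copied from the Workflow tab after running the command once through the menu:

```lang=java
// Select all annotations, then compute intensity features inside them
// The parameter string below is a minimal example; copy the full one
// recorded in your Workflow tab for your QuPath version
selectAnnotations()
runPlugin('qupath.lib.algorithms.IntensityFeaturesPlugin',
    '{"region": "ROI", "doMean": true, "doStdDev": true}')
```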
== Can we do pixel classification? ==
Pixel classification is currently experimental and on its way to a new version of QuPath, but currently it is not supported.
== What other objects besides cells can be classified? ==
All **PathObjects** can be classified (that is, Annotations, Detections and Superpixels for now). Annotations can have their classes set using the GUI. Detections have their classes set by different tools but not via the GUI. It can be done via a script if necessary.
=== Example Script ===
```lang=java
// Create a new class or select an existing class by name
def myClass = getPathClass('My Class')
// Get currently selected object and set its class
def currentObject = getSelectedObject()
currentObject.setPathClass(myClass)
```
== Can we make an annotation out of a classification result? ==
== How do I set the resolution of an image manually? ==
== Is there a way to make “close project” also close the currently open image? ==
== Is it possible to add measurements to regions from ImageJ? See this Blog Post ==
== How do we detect other things like fibers or differently shaped cells? ==
There are currently no tools that perform such analyses at this point.