ijs-Perkin Elmer Operetta CLS, Stitching And Export
We obtained a Perkin Elmer Operetta CLS in mid-2017 to address medium- to high-throughput live imaging needs.
The system is tightly integrated with its own software, so some work was needed to extract the images for use in ImageJ.
= Installation =
You can download the `.groovy` script from our Git repository:
rOPERETTAIMPORT
You can add it to your `plugins > Scripts` folder, or simply open it and run it directly in Fiji.
== Dependencies ==
We make use of [[http://www.gpars.org/ | GPars: A Concurrency & Parallelism Framework]] to get the most out of our workstations.
For this to work, make sure that the following libraries are located in the `jars` folder of Fiji/ImageJ:
- `groovy-xml.jar` - Contains the XML parsing library we use to read the Operetta metadata
- `groovy-swing.jar` - Contains what is needed to build the GUI for selecting the wells to process
- `gpars.jar` - Is the concurrency framework that allows us to have fun with parallel loops
- `jsr166y.jar` - Is the Java library that GPars depends on
We provide them here for convenience. If you want to make sure you have the latest versions, you can [[ http://groovy-lang.org/download.html | download and unzip Groovy ]] (just get the binaries) and grab these files from its `lib` folder. A quick way to check that Fiji actually picks them up is shown after the attachments below.
{F4070847}
{F4056414}
{F4056415}
{F4056413}
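If you are unsure whether the jars were picked up, the snippet below (run from the Fiji Script Editor with Groovy selected as the language) simply tries to load one representative class from each library. The class names correspond to the library versions linked above and are illustrative; newer Groovy releases for instance have moved `XmlSlurper` to the `groovy.xml` package.

```
lang=groovy
// Rough sanity check: try to load one class from each required jar.
// A "MISSING" line means the corresponding jar is not on Fiji's classpath.
[ 'groovy.util.XmlSlurper'   : 'groovy-xml.jar',
  'groovy.swing.SwingBuilder': 'groovy-swing.jar',
  'groovyx.gpars.GParsPool'  : 'gpars.jar',
  'jsr166y.ForkJoinPool'     : 'jsr166y.jar' ].each { className, jar ->
    try {
        Class.forName(className)
        println "OK      : ${jar}"
    } catch (ClassNotFoundException e) {
        println "MISSING : ${jar} (${className} not found)"
    }
}
```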
= Use =
After saving a series of images using the Export button,
{F4055744, size=full}
we obtain a folder structured like this:
{F4055762, size=full}
Inside the `Images` folder, all the wells, fields, channels, slices and timepoints are stored as individual **compressed** TIFFs.
More importantly, there is an `Index.idx.xml` file that contains all the information about the acquisition.
When you run the script, simply provide the location of the `Images` folder and the downsample factor you want (if any).
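To give an idea of what reading this metadata looks like, here is a minimal sketch (not the actual importer) using the `XmlSlurper` shipped in `groovy-xml.jar`. The element names (`Image`, `URL`, `ChannelID`, `TimepointID`) are illustrative assumptions; check your own `Index.idx.xml` for the exact tags written by your Harmony version.

```
lang=groovy
// Minimal sketch: list the image planes declared in Index.idx.xml.
def indexFile = new File("/path/to/export/Images/Index.idx.xml")
def xml = new XmlSlurper().parse(indexFile)

// collect every <Image> node, wherever it sits in the tree
def images = xml.'**'.findAll { it.name() == 'Image' }
println "Image files declared in the metadata: ${images.size()}"

// peek at the first few entries
images.take(5).each { img ->
    println "${img.URL.text()}  (channel ${img.ChannelID.text()}, timepoint ${img.TimepointID.text()})"
}
```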
= Rationale Behind the Script =
== Missing Images ==
One major issue is that images that could not be acquired, for whatever reason, simply do not appear in the export, so a naive import into ImageJ was not possible.
We needed a way to parse the metadata in order to know the real dimensions of the acquired dataset. This way, we can create a clean hyperstack where each image ends up at its correct location even when some are missing.
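The idea can be sketched as follows (this is not the actual script): pre-allocate a hyperstack at the planned dimensions, then drop every plane that really exists on disk into its theoretical slot, so that missing planes simply stay black. The dimensions, file names and the `existingPlanes` list below are made up for the example.

```
lang=groovy
import ij.IJ

// dimensions as they would be parsed from the metadata (example values)
int width = 1080, height = 1080
int nC = 2, nZ = 3, nT = 4

// hypothetical list of planes that actually exist on disk: [path, c, z, t]
def existingPlanes = [
    [path: "/path/to/Images/r01c01f01p01-ch1sk1fk1fl1.tiff", c: 1, z: 1, t: 1],
    [path: "/path/to/Images/r01c01f01p01-ch2sk1fk1fl1.tiff", c: 2, z: 1, t: 1]
]

// create a hyperstack sized for the *planned* acquisition...
def imp = IJ.createHyperStack("well A1", width, height, nC, nZ, nT, 16)

// ...and place every existing plane at its theoretical position
// (the opened planes are assumed to match the hyperstack size and bit depth)
existingPlanes.each { p ->
    def single = IJ.openImage(p.path)
    int index = imp.getStackIndex(p.c, p.z, p.t)   // 1-based channel/slice/frame
    imp.getStack().setProcessor(single.getProcessor(), index)
}
imp.show()
```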
== Compressed Tiffs ==
The exported output contains TIFFs that are lzip-compressed. For ImageJ, this means there is a significant overhead to decompress each file before its pixel data can be accessed (roughly 150 ms vs 10 ms for opening an uncompressed TIFF of the same dimensions). This prompted us to parallelize the opening of each series to maximize hard drive and processor bandwidth.
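The parallel opening can be sketched with GPars as below; the paths are placeholders, and in practice the list of files comes from the parsed metadata.

```
lang=groovy
import groovyx.gpars.GParsPool
import ij.IJ

// placeholder paths for the sketch
def tiffPaths = (1..8).collect { "/path/to/Images/plane_${it}.tiff" }

GParsPool.withPool(Runtime.runtime.availableProcessors()) {
    // each compressed tiff is decompressed on its own thread
    def opened = tiffPaths.collectParallel { path -> IJ.openImage(path) }
    println "Opened ${opened.count { it != null }} of ${tiffPaths.size()} planes"
}
```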
== Stitching ==
Users wanted to be able to view the stitched versions of their multiple fields in ImageJ, so this script assembles the images based on the stage coordinates provided by the Operetta. For now, no smart stitching or overlap computation is performed.
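A stripped-down sketch of that coordinate-based assembly is shown below (toy values, no overlap handling, and a canvas sized by hand rather than computed from the metadata). The pixel size and stage positions are placeholders for what would be read from `Index.idx.xml`.

```
lang=groovy
import ij.IJ
import ij.ImagePlus
import ij.process.ShortProcessor

double pixelSizeUm = 0.65     // placeholder; read from the metadata in practice

// hypothetical per-field info: file path plus absolute stage position in µm
def fields = [
    [path: "/path/to/Images/field01.tiff", xUm: 0.0,   yUm: 0.0],
    [path: "/path/to/Images/field02.tiff", xUm: 650.0, yUm: 0.0]
]

// stage positions (µm) -> pixel offsets relative to the top-left field
def toPx = { double um -> (int) Math.round(um / pixelSizeUm) }
int x0 = fields.collect { toPx(it.xUm) }.min()
int y0 = fields.collect { toPx(it.yUm) }.min()

def canvas = new ShortProcessor(2200, 1100)   // sized by hand for this toy example

fields.each { f ->
    def imp = IJ.openImage(f.path)
    // paste each field at its stage-derived position, no blending of overlaps
    canvas.insert(imp.getProcessor(), toPx(f.xUm) - x0, toPx(f.yUm) - y0)
}
new ImagePlus("Assembled fields (positions only)", canvas).show()
```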
= Known Issues =
== Too many threads are being created ==
We use a multi-threaded approach to process several wells in parallel. Within each of these threads, we spawn more threads to handle each timepoint separately. This way, we can make use of all the RAM (wells in parallel) and of the CPU to read the TIFFs (timepoints in parallel).
However, this creates more threads than it should. An issue was filed at https://github.com/GPars/GPars/issues/55.
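The pattern looks roughly like the toy sketch below (well names, pool sizes and timepoints are arbitrary): each outer closure opens its own inner pool, which multiplies the number of threads.

```
lang=groovy
import groovyx.gpars.GParsPool

def wells = ["A1", "A2", "B1", "B2"]
def timepoints = 1..6

GParsPool.withPool(2) {                 // outer pool: wells in parallel
    wells.eachParallel { well ->
        GParsPool.withPool(4) {         // inner pool per well: timepoints in parallel
            timepoints.eachParallel { t ->
                println "well ${well}, timepoint ${t} on ${Thread.currentThread().name}"
            }
        }
    }
}
```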
This is not a problem per se, but your PC will be very slow until processing finishes... Get in touch with the BIOP if you encounter issues.
Best
Oli