This page describes the setup needed to run the deep learning tools currently in use at the BIOP, using Python virtual environments (`venv`).
= Why VENV? Why do you hate Conda, Oli? =
I do not hate Conda; it is very useful and powerful. But when it comes to scientific packages and unreleased GPU-based libraries, I find the "black box, it's OK, it's on Conda-Forge" approach lacking. With venv, the PC itself must be configured properly, which brings an understanding of the underlying dependencies that Conda installs implicitly. I am also comfortable doing this because we are at a stage where things like having multiple CUDA versions and multiple Python versions are neither harmful nor particularly difficult to handle.
= What this guide will show you =
First and foremost, this guide is intended for Windows 10. Linux is well enough documented that I luckily do not need to dive into it, and we currently do not have any Linux-based workstation at the facility anyway.
**You will learn to**
1. Install and manage multiple versions of Python: 3.7, 3.8 and 3.9
2. Install the NVIDIA CUDA libraries for CUDA 10.0, 10.1, 10.2 and 11.2 on your PC as well as CuDNN for all these versions
3. Install the Microsoft Build Tools for C++ (Ideally just the bare minimum)
4. Install NodeJS (for Jupyter Lab)
= What can you install with this? =
This approach has enabled us to install all of the following with GPU support:
# StarDist with `gputools`
# CSBDeep
# Noise2Void
# DenoiSeg
# CellPose with PyTorch GPU
# CellProfiler with Cellpose
= What can you not install with this? =
# YAPIC: One of its dependencies is not available on Windows, which makes the installation fail. We run it on our University's GPU nodes instead.
= Installation of all dependencies =
NOTE: This setup is intended for a workstation with a good GPU and Windows 10 installed. It has been tested with the following cards:
**RTX Titan, GTX 1080, GTX 1080 Ti, RTX 2080 Ti**
== Software to Install ==
- [[ https://www.nvidia.com/Download/index.aspx | Download Latest NVIDIA Drivers ]] - We use the `Studio Drivers`
- [[ https://visualstudio.microsoft.com/downloads/#build-tools-for-visual-studio-2019 |Download Build Tools for Visual Studio 2019 ]] - Install the `Build tools for C++`, namely
- `MSVC v142 - VS 2019 C++ x64/x86 build tools`
- `Windows 10 SDK (10.0.18362.0)`
- [[ https://developer.nvidia.com/cuda-toolkit-archive | Download CUDA Toolkits ]] Get the installers for 10.0, 10.1, 10.2 and 11.2 - Install just the `Developer` and `Runtime` parts
{F14514741, width=300, layout=center}
- [[ https://developer.nvidia.com/rdp/cudnn-archive | Download CuDNN ]]
This [[https://www.tensorflow.org/install/source#gpu| CuDNN compatibility chart]] lets you know what works and what does not for TensorFlow, which is the most finicky of the frameworks. PyTorch is much more lenient, it seems.
After you install each CUDA Toolkit, install the appropriate CuDNN. This is how I installed them and why (a quick verification sketch follows the list below):
|CUDA Version|CuDNN Version|Why|
|---------------|------------------|-----|
|10.0 | 7.6.5 | TensorFlow 1.13 to 1.15|
|10.1 | 7.6.5 | TensorFlow 2.1.0 to 2.3.0|
|10.2 | 8.1.1 | [[https://pytorch.org/get-started/locally/| PyTorch 1.9 Stable]] for CellPose|
|11.2 | 8.1.1 | TensorFlow 2.5.0 to 2.6.0 (Tests) |
- [[https://www.python.org/downloads/release/python-379/| Install Python 3.7]]
- [[https://www.python.org/downloads/release/python-3810/| Install Python 3.8]]
- [[https://www.python.org/downloads/release/python-396/| Install Python 3.9]] - This one is necessary for CellProfiler, for example
- [[https://nodejs.org/en/download/|NodeJS]] - Install all defaults. This is for adding extensions to Jupyter Lab.
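If you want to check that each CUDA Toolkit and CuDNN landed where expected, a couple of quick commands from a command prompt are enough. This is a minimal sketch assuming the default installation paths; `nvcc --version` reports whichever toolkit comes first on your `PATH`.
```
REM CUDA compiler found first on the PATH
nvcc --version

REM List the installed toolkit folders (default install location)
dir "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA"

REM Confirm the CuDNN DLL was unzipped into a given toolkit, e.g. v10.1
dir "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\bin\cudnn*.dll"
```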
== Environment Variables ==
NOTE: Environment Variables are file and folder locations your operating system looks into when you call a program or library. It is like the index of a library, telling you where to look for different things.
In Windows, we can create and edit Environment Variables graphically.
# Start typing "Environment" in the Windows searchbar until it suggests "Edit the system environment variables".
# Click on {nav Environment Variables...} and use the {nav New...} button at the very bottom (the one in the "System variables" section) to create the two following environment variables.
|VARIABLE NAME | Path (Default) |
|-------------------|-----------------|
|INCLUDE| `C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\include`|
|LIB| `C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\lib\x64`|
These are needed to compile PyOpenCL, a dependency of `gputools` for StarDist, and they are also useful for any other GPU-based tools that may need compiling.
NOTE: You will need to click on {nav New...} for each Environment Variable you create.
WARNING: If you are compiling something other than StarDist that needs a different version of CUDA, you NEED to point these paths at that CUDA version during compilation. Once something is compiled, it will run no matter what these paths are set to; they are only needed when compiling. Did I mention that compiling is the only time these environment variables are needed?
In the end it should look like this:
{F15073749, width=300, layout=center}
Note how there is also a CUDA 10.0 version installed. This is fine, and the reason will become clear below if you install Noise2Void.
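If you prefer the command line over the GUI, the same two variables can be created with `setx` from an **administrator** command prompt. This is just a sketch assuming the default CUDA 10.1 paths from the table above; the `/M` switch writes them as System Variables.
```
setx INCLUDE "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\include" /M
setx LIB "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\lib\x64" /M
```
Note that `setx` only affects command prompts opened after you run it.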
= Choosing Your Python Version =
Because you have multiple Python versions, you can check which are available with the `py --list` command.
```
C:\Users\demo>py --list
Installed Pythons found by py Launcher for Windows
-3.9-64 *
-3.8-64
-3.7-64
```
So if we need to create a `venv` for Python 3.7, we can write
`py -3.7 -m venv d:\my-new-venv-py37`
When we activate the environment using `d:\my-new-venv-py37\Scripts\activate`, it will be backed by Python 3.7.
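As a quick sanity check (a sketch using the hypothetical `my-new-venv-py37` environment from above), you can confirm which interpreter backs the activated environment:
```
d:\my-new-venv-py37\Scripts\activate
python --version
REM should report Python 3.7.x

where python
REM the venv's Scripts folder should be listed first

deactivate
```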
= Creating Virtual Environments =
NOTE: Consider upgrading `pip` to the latest version before starting: `python -m pip install --upgrade pip`
=== Windows Command Prompt `cmd` ===
IMPORTANT: We are running all the commands below from the **Windows Command Prompt**. Access it using {key Win R}, type `cmd` and hit enter.
=== StarDist with TensorFlow 1.15 ===
We are going to create a `stardist-tf15` environment in the `D:\environments` folder using Python 3.7
From your command prompt:
```
d:
mkdir environments
py -3.7 -m venv environments\stardist-tf15
environments\stardist-tf15\Scripts\activate
```
(WARNING) **Checkpoint**: Make sure the environment uses the right Python version by running `where python`; you should see output like the one below
```
D:\environments\stardist-tf15\Scripts\python.exe
C:\Users\oburri\AppData\Local\Programs\Python\Python37\python.exe
C:\Users\oburri\AppData\Local\Microsoft\WindowsApps\python.exe
```
Finally, we can install all of StarDist using the `stardist.txt` file below
`pip install -r stardist.txt`
{F15067175}
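Once the requirements are installed, it is worth checking that TensorFlow actually sees the GPU. This is a sketch, to be run with the `stardist-tf15` environment still activated; `tf.test.is_gpu_available()` works for TensorFlow 1.15.
```
python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"
REM should print True; the console also lists the CUDA/CuDNN libraries being loaded
```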
=== Noise2Void ===
Noise2Void currently needs an older version of CUDA to work.
Installations:
- [[ https://developer.nvidia.com/cuda-10.0-download-archive?target_os=Windows&target_arch=x86_64&target_version=10&target_type=exenetwork | Download CUDA Toolkit 10.0 ]]
- [[ https://developer.nvidia.com/compute/machine-learning/cudnn/secure/7.6.5.32/Production/10.0_20191031/cudnn-10.0-windows10-x64-v7.6.5.32.zip | Download CuDNN 7.6.5 for CUDA 10.0]] - Unzip into `C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0\`
- [[https://www.python.org/downloads/release/python-379/| Install Python 3.7]] - We install it for all users and choose to add Python to the `PATH`
We are going to create an `n2v` environment in the `D:\environments` folder using Python 3.7
Start a command prompt and type
```
d:
mkdir environments
py -3.7 -m venv environments\n2v
environments\n2v\Scripts\activate
```
Finally, we can install all of Noise2Void using the `n2v.txt` file below
`pip install -r n2v.txt`
{F15067171}
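As with StarDist, a quick check (a sketch, with the `n2v` environment still activated) confirms that the package installed and that TensorFlow finds the GPU through CUDA 10.0:
```
pip show n2v
python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"
REM should print True if CUDA 10.0 and CuDNN 7.6.5 are found
```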
=== CellPose ===
Installations:
- [[ https://developer.nvidia.com/cuda-10.1-download-archive-update2?target_os=Windows&target_arch=x86_64&target_version=10&target_type=exenetwork | Download CUDA Toolkit 10.1 Update 2]] - Install just the `Developer` and `Runtime` parts
- [[ https://developer.nvidia.com/compute/machine-learning/cudnn/secure/7.6.5.32/Production/10.1_20191031/cudnn-10.1-windows10-x64-v7.6.5.32.zip | Download CuDNN 7.6.5 for CUDA 10.1]] - Unzip into `C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\`
- [[https://www.python.org/downloads/release/python-379/| Install Python 3.7]] - We install it for all users and choose to add Python to the `PATH`
We are going to create a `cellpose` environment in the `D:\environments` folder using Python 3.7
Start a command prompt and type
```
d:
mkdir environments
py -3.7 -m venv environments\cellpose
environments\cellpose\Scripts\activate
```
Finally, we can install all of CellPose using the `cellpose.txt` file below
```
pip install -r cellpose.txt
pip uninstall mxnet-mkl -y
pip install mxnet-cu101
```
{F15073960}
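To check that the GPU-enabled `mxnet-cu101` build is the one actually being used (a sketch, with the `cellpose` environment still activated):
```
python -c "import mxnet; print(mxnet.context.num_gpus())"
REM should print the number of GPUs (1 or more), not 0
```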
= Setup for SCITAS GPU Clusters =
Romain can tell us this :)
= 'virtualenv' Information & Examples =
IMPORTANT: Notice how we always run `Scripts\activate` **before** running any `pip` commands. The `activate` command ensures that we are working inside the `virtualenv`. To confirm you are in the virtualenv, its name should appear in parentheses at the left of the command prompt.
{F15073659, width=700, layout=center}
IMPORTANT: A `virtualenv` is unique to each machine (and potentially each user) and cannot be duplicated. It is not enough to copy the `stardist-tf15` environment to another location or another computer in order to use it. This is by construction: file paths and file permissions are tied to the location where the virtual environment was created.
IMPORTANT: You should **not** keep your Notebooks in the same folder as your `virtualenv`. A `virtualenv` is a folder containing the libraries and executables that create an //environment// for running your code; it is independent of your data. Keeping them together would be like storing your Excel results in `C:\Program Files\Microsoft Office`.
== Example: Activating `stardist-tf15` and running JupyterLab in your project folder ==
Suppose we have the following setup:
{F15073616, width=400, layout=center}
Note how the `My Project` folder is not even on the same disk as the `virtualenv`.
To start JupyterLab in your folder, do the following from the command prompt
```
d:\environments\stardist-tf15\Scripts\activate
cd /d "E:\My Project"
jupyter lab
```
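Optionally, you can register the environment as a named Jupyter kernel so that it appears in the JupyterLab launcher whichever environment started Jupyter. This is a sketch, assuming `ipykernel` is available in the environment (it is pulled in as a dependency of JupyterLab); the kernel name and display name are free to choose.
```
d:\environments\stardist-tf15\Scripts\activate
python -m ipykernel install --user --name stardist-tf15 --display-name "Python 3.7 (stardist-tf15)"
```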
= Create a shortcut to activate your `virtualenv` =
=== Example with Noise2Void ===
You can create a `Run Noise2Void.bat` file and keep it somewhere in your project folder to do this automatically. The syntax is slightly different from above.
Copy paste the code below into the file you just created. Adjust paths as needed.
```
call d:\environments\n2v\scripts\activate
cd /d "E:\My Project"
call jupyter lab
```
=== Example with StarDist ===
You can create a `Run StarDist-TF15.bat` file and keep it somewhere in your project folder to do this automatically.
Copy paste the code below into the file you just created. Adjust paths as needed.
```
call d:\environments\stardist-tf15\scripts\activate
cd /d "E:\My Project"
call jupyter lab
```