So I recently wiped my system and upgraded to Linux Mint Cinnamon 20. I tend to wipe and install on major releases since I do a lot of customization.
Anyway, I wanted to set CUDA back up along with tensorflow-gpu since I have stuff I wanted to do. I recreated my virtual environment and found TensorFlow 2.2.0 had been released. Based on this I found it still needs CUDA 10.1. No worries; I went through and put CUDA 10.1, cuDNN, and TensorRT back on my system, and everything was working.
I noticed with 2.2.0 that I was getting the dreaded RTX CUDA_ERROR_OUT_OF_MEMORY errors for pretty much anything I did. So I fixed it and figured I'd post this in case it helps anyone else out down the road. You need to add the following so that GPU memory allocation can grow as needed instead of being claimed all at once, and to enable mixed precision, which takes advantage of the Tensor Cores in the RTX series.
from tensorflow import config as tfc
from tensorflow.keras.mixed_precision import experimental as mixed_precision

...

# Let TensorFlow allocate GPU memory on demand rather than grabbing it all up front.
# Note: set_memory_growth takes a single device, so loop over the list.
gpus = tfc.experimental.list_physical_devices("GPU")
for gpu in gpus:
    tfc.experimental.set_memory_growth(gpu, True)

# Run compute in float16 while keeping variables in float32.
policy = mixed_precision.Policy('mixed_float16')
mixed_precision.set_policy(policy)
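To sanity-check that the policy actually took effect, here's a minimal sketch. It assumes the TF 2.2-era experimental API used above, with a fallback for newer TF versions that promoted mixed precision out of `experimental`; the tiny model is just an illustration, not from the original post. One practical detail worth showing: keep the final activation in float32 so the softmax stays numerically stable under mixed precision.

```python
import tensorflow as tf

try:
    # TF 2.2-style experimental API, as in the snippet above.
    from tensorflow.keras.mixed_precision import experimental as mixed_precision
    policy = mixed_precision.Policy('mixed_float16')
    mixed_precision.set_policy(policy)
except (ImportError, AttributeError):
    # Newer TF (2.4+) moved this to tf.keras.mixed_precision directly.
    tf.keras.mixed_precision.set_global_policy('mixed_float16')
    policy = tf.keras.mixed_precision.global_policy()

# Under mixed_float16, layers compute in float16 but store variables in float32.
print(policy.compute_dtype)   # 'float16'
print(policy.variable_dtype)  # 'float32'

# Hypothetical toy model: keep the softmax in float32 for numeric stability.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(16,)),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(10),
    tf.keras.layers.Activation('softmax', dtype='float32'),
])
print(model.output.dtype)
```

If the two policy dtypes print as above, mixed precision is active and the Tensor Cores can kick in on float16 matmuls.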
If you’re hitting out-of-memory errors on your RTX card, give this a shot. You can read more about TensorFlow and mixed precision here.