How do I prevent TensorFlow from allocating the totality of a GPU's memory?
When you construct a tf.Session, you can pass a tf.GPUOptions object as part of the optional config argument to set the fraction of GPU memory to be allocated:
# Assume that you have 12GB of GPU memory and want to allocate ~4GB:
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
per_process_gpu_memory_fraction acts as a hard upper bound on the amount of GPU memory that the process will use. The fraction is applied uniformly to all of the GPUs on the same machine; there is no way to set it on a per-GPU basis.
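To make the semantics concrete, here is a small arithmetic sketch (plain Python, no TensorFlow needed) of how the fraction translates into a per-GPU byte cap; the 12 GB card size is just an illustrative assumption:

```python
def memory_cap_bytes(total_bytes, fraction):
    """Upper bound on GPU memory the process may use on ONE GPU.

    The same fraction is applied to every visible GPU on the machine.
    """
    return int(total_bytes * fraction)

gpu_total = 12 * 1024**3                      # assume a 12 GB card
cap = memory_cap_bytes(gpu_total, 0.333)
print(cap / 1024**3)                          # roughly 4 GB per GPU
```

With two identical 12 GB GPUs visible, the process could use up to ~4 GB on each, not ~4 GB in total.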
In some cases it is desirable for the process to allocate only a subset of the available memory, or to grow its memory usage only as it is needed. TensorFlow provides two configuration options on the session to control this. The first is the allow_growth option, which attempts to allocate only as much GPU memory as is needed at runtime:
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
session = tf.Session(config=config, ...)
The second method is the per_process_gpu_memory_fraction option, which determines the fraction of the total amount of memory that each visible GPU should be allocated.
For example, to allocate a fixed fraction of memory:
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.4
session = tf.Session(config=config, ...)
This allocates only 40% of the total memory of each visible GPU.