What is the difference between a name scope and a variable scope in TensorFlow?
We should start with a short introduction to variable sharing. It is a mechanism in TensorFlow that allows variables to be shared across different parts of the code without passing references to the variable around.
tf.get_variable can be used with the name of the variable as the argument to either create a new variable with that name or retrieve the one that was created before. This is different from using the tf.Variable constructor, which creates a new variable every time it is called (and potentially adds a suffix to the variable name if a variable with that name already exists).
It is for the sake of the variable sharing mechanism that a separate kind of scope (variable scope) was introduced.
As a result, we end up with two different types of scopes:
- name scope, created using tf.name_scope
- variable scope, created using tf.variable_scope
Both scopes have the same effect on all operations as well as on variables created using tf.Variable, i.e., the scope will be added as a prefix to the operation or variable name.
However, name scope is ignored by tf.get_variable. We can see that in the example below:
```python
with tf.name_scope("my_scope"):
    v1 = tf.get_variable("var1", [1], dtype=tf.float32)
    v2 = tf.Variable(1, name="var2", dtype=tf.float32)
    a = tf.add(v1, v2)

print(v1.name)  # var1:0
print(v2.name)  # my_scope/var2:0
print(a.name)   # my_scope/Add:0
```
If we instead use tf.get_variable in a variable scope, the variable name does get the prefix, as in the example below:
```python
with tf.variable_scope("my_scope"):
    v1 = tf.get_variable("var1", [1], dtype=tf.float32)
    v2 = tf.Variable(1, name="var2", dtype=tf.float32)
    a = tf.add(v1, v2)

print(v1.name)  # my_scope/var1:0
print(v2.name)  # my_scope/var2:0
print(a.name)   # my_scope/Add:0
```
This enables us to easily share variables across different parts of the program, even within different name scopes:
```python
with tf.name_scope("foo"):
    with tf.variable_scope("var_scope"):
        v = tf.get_variable("var", [1])
with tf.name_scope("bar"):
    with tf.variable_scope("var_scope", reuse=True):
        v1 = tf.get_variable("var", [1])
assert v1 == v

print(v.name)   # var_scope/var:0
print(v1.name)  # var_scope/var:0
```
Namespaces are used to organize names for variables and operators in a hierarchical manner, for example: "scopeA/scopeB/scopeC/op1".
- tf.name_scope creates a namespace for operators in the default graph.
- tf.variable_scope creates a namespace for both variables and operators in the default graph.
- tf.op_scope is like tf.name_scope, but for the graph in which the specified variables were created.
- tf.variable_op_scope is like tf.variable_scope, but for the graph in which the specified variables were created.
This shows that all types of scopes define namespaces for both variables and operators, with the following differences:
- tf.variable_op_scope and tf.variable_scope are compatible with tf.get_variable
- tf.op_scope and tf.variable_op_scope simply select a graph from a list of specified variables to create a scope for. Other than that, their behaviour is equivalent to tf.name_scope and tf.variable_scope respectively
- tf.variable_scope and tf.variable_op_scope accept a specified or default initializer
As of API r0.11, op_scope and variable_op_scope are both deprecated.
name_scope and variable_scope can be nested:
```python
with tf.name_scope('ns'):
    with tf.variable_scope('vs'):  # scope creation
        v1 = tf.get_variable("v1", [1])     # v1.name = 'vs/v1:0'
        v2 = tf.Variable([2.0], name='v2')  # v2.name = 'ns/vs/v2:0'
        v3 = v1 + v2                        # v3.name = 'ns/vs/add:0'
```