I want to draw the weights of tf.layers.dense as a TensorBoard histogram, but they don't show up in the parameters. How can I do that?

The weights are added as a variable named kernel, so you could use

import os

x = tf.layers.dense(...)
weights = tf.get_default_graph().get_tensor_by_name(
    os.path.split(x.name)[0] + '/kernel:0')

You can obviously replace tf.get_default_graph() with any other graph you are working in.
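To tie this back to the original question, here is a minimal sketch of logging that kernel as a TensorBoard histogram, assuming TF 1.x graph mode and a hypothetical placeholder input:

import os
import tensorflow as tf

input_tf = tf.placeholder(tf.float32, [None, 784])  # hypothetical input
x = tf.layers.dense(input_tf, 300)

# Look up the kernel variable via the output tensor's name prefix.
weights = tf.get_default_graph().get_tensor_by_name(
    os.path.split(x.name)[0] + '/kernel:0')

# Register a histogram summary so the kernel shows up in TensorBoard.
tf.summary.histogram('dense_kernel', weights)
merged = tf.summary.merge_all()
# In the training loop: writer.add_summary(sess.run(merged, feed_dict=...), step)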

And to get biases just use

bias = tf.get_default_graph().get_tensor_by_name(
    os.path.split(x.name)[0] + '/bias:0')
It makes a lot more sense to add kernel and bias as properties of x itself!
I came across this problem and just solved it. The name of tf.layers.dense does not have to match the prefix of the kernel's name. My tensor is "dense_2/xxx" but its kernel is "dense_1/kernel:0". To ensure that tf.get_variable works, you should set name=xxx in the tf.layers.dense call so that the two names share the same prefix. It works as in the demo below:

l = tf.layers.dense(input_tf_xxx, 300, name='ip1')
with tf.variable_scope('ip1', reuse=True):
    w = tf.get_variable('kernel')

By the way, my tf version is 1.3.
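As a quick check that the prefixes actually line up, you can print both names (same setup as above, with a hypothetical placeholder standing in for input_tf_xxx):

import tensorflow as tf

input_tf_xxx = tf.placeholder(tf.float32, [None, 100])  # hypothetical input
l = tf.layers.dense(input_tf_xxx, 300, name='ip1')
with tf.variable_scope('ip1', reuse=True):
    w = tf.get_variable('kernel')

print(l.name)  # e.g. 'ip1/BiasAdd:0' -- the output tensor
print(w.name)  # 'ip1/kernel:0' -- same 'ip1' prefix because name= was set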
The latest tensorflow layers api creates all the variables using the tf.get_variable call. This ensures that if you wish to use the variable again, you can just use the tf.get_variable function and provide the name of the variable that you wish to obtain.

In the case of a tf.layers.dense, the variable is created as layer_name/kernel. So, you can obtain the variable by saying:

with tf.variable_scope("layer_name", reuse=True):
    weights = tf.get_variable("kernel")  # do not specify the shape here,
    # or it will confuse tensorflow into creating a new one
[Edit]: The new version of TensorFlow now has both Functional and Object-Oriented interfaces to the layers api. If you need the layers only for computational purposes, then using the functional api is a good choice. The function names start with lowercase letters, for instance tf.layers.dense(...). The Layer objects can be created using capital first letters, e.g. tf.layers.Dense(...). Once you have a handle to this layer object, you can use all of its functionality. For obtaining the weights, just use obj.trainable_weights; this returns a list of all the trainable variables found in that layer's scope.
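For example, a minimal sketch of the object-oriented route (TF 1.x; the input placeholder and the layer name are made up):

import tensorflow as tf

inputs = tf.placeholder(tf.float32, [None, 784])  # hypothetical input
dense_layer = tf.layers.Dense(units=300, name='ip1')  # layer object, capital D
outputs = dense_layer(inputs)  # variables are created on this first call

# trainable_weights is a list: [kernel, bias]
for var in dense_layer.trainable_weights:
    print(var.name)  # 'ip1/kernel:0', 'ip1/bias:0'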
I am going crazy with tensorflow.
I run this:

sess.run(x.kernel)

after training, and I get the weights.
This comes from the properties described here.
I am saying that I am going crazy because it seems that there are a million slightly different ways to do something in tf, and that fragments the tutorials.
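For what it's worth, x.kernel is a property of the layer object rather than of the output tensor, so a sketch under that assumption would be:

import tensorflow as tf

x = tf.layers.Dense(units=10)  # keep a handle to the layer object itself
y = x(tf.placeholder(tf.float32, [None, 4]))  # hypothetical input

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    kernel_value = sess.run(x.kernel)  # kernel as a numpy array
    print(kernel_value.shape)  # (4, 10)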
Is there anything wrong with

model.get_weights()

After I create a model, compile it, and run fit, this function returns a list of numpy arrays with the weights for me.
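A small sketch with tf.keras (the layer sizes and the dummy data are arbitrary):

import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer='adam', loss='mse')
model.fit(np.random.rand(32, 4), np.random.rand(32, 1), epochs=1, verbose=0)

weights = model.get_weights()  # [kernel_1, bias_1, kernel_2, bias_2]
print([w.shape for w in weights])  # [(4, 8), (8,), (8, 1), (1,)]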
In TF 2, if you're inside a @tf.function (graph mode):

weights = optimizer.weights

If you're in eager mode (the default in TF 2, except inside @tf.function-decorated functions):

weights = optimizer.get_weights()
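A sketch of the eager-mode call (note these are the optimizer's own variables, such as Adam's moment estimates, not the layer kernels; on newer TF versions get_weights() may only exist on the legacy optimizers):

import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
optimizer = tf.keras.optimizers.Adam()
model.compile(optimizer=optimizer, loss='mse')
model.fit(np.random.rand(16, 4), np.random.rand(16, 1), epochs=1, verbose=0)

print(optimizer.weights)  # list of tf.Variables: iteration count + slot variables
# optimizer.get_weights() returns the same values as numpy arrays; on TF 2.11+
# this may require tf.keras.optimizers.legacy.Adam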
In TF 2, a layer's weights is a list of length 2:

weights[0] = kernel weight
weights[1] = bias weight

Below, the second layer's weights (layers[0] is the input layer, which has no weights) in a model with layer size 50 and input size 784:
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(50, activation="relu", name="dense_1")(inputs)
x = layers.Dense(50, activation="relu", name="dense_2")(x)
outputs = layers.Dense(10, activation="softmax", name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(...)
model.fit(...)

kernel_weight = model.layers[1].weights[0]
bias_weight = model.layers[1].weights[1]
all_weight = model.layers[1].weights
print(len(all_weight))      # 2
print(kernel_weight.shape)  # (784, 50)
print(bias_weight.shape)    # (50,)
Try making a loop to get the weights of each layer in your sequential network, printing each layer's name first, which you can get from:

model.summary()

Then you can get the weights of each layer by running this code:

for layer in model.layers:
    print(layer.name)
    print(layer.get_weights())