What are logits, softmax and softmax_cross_entropy_with_logits?
Assume you have two tensors, where y_hat contains the unscaled scores for each class (for example, from y = W*x + b) and y_true contains the one-hot encoded true labels.
y_hat = ...   # Predicted scores, e.g. y_hat = tf.matmul(X, W) + b
y_true = ...  # True labels, one-hot encoded
If you interpret the scores in y_hat as unnormalized log probabilities, then they are logits.
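To make "unnormalized log probabilities" concrete, here is a minimal NumPy sketch (the values are hypothetical) showing how softmax turns a vector of logits into a proper probability distribution:

```python
import numpy as np

# Hypothetical logits (unnormalized log probabilities) for 3 classes.
logits = np.array([2.0, 1.0, 0.1])

# Softmax: exponentiate and normalize. Subtracting the max first
# improves numerical stability without changing the result.
exp_shifted = np.exp(logits - np.max(logits))
probs = exp_shifted / exp_shifted.sum()

print(probs)        # a valid probability distribution
print(probs.sum())  # sums to 1
```

Note that softmax preserves the ordering of the logits: the class with the largest logit also gets the largest probability.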
Furthermore, the total cross-entropy loss computed in this manner:
y_hat_softmax = tf.nn.softmax(y_hat)
total_loss = tf.reduce_mean(-tf.reduce_sum(y_true * tf.log(y_hat_softmax), axis=[1]))
is essentially equivalent to the total cross-entropy loss computed with the function softmax_cross_entropy_with_logits():
total_loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=y_hat, labels=y_true))
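The equivalence of the two routes can be checked numerically. Below is a NumPy sketch (all values hypothetical) that computes the loss once via an explicit softmax plus cross-entropy, and once directly from the logits using the log-sum-exp identity -log(softmax(y_hat)[true]) = logsumexp(y_hat) - y_hat[true], which is the more numerically stable formulation that fused functions like softmax_cross_entropy_with_logits rely on:

```python
import numpy as np

# Two examples, three classes; hypothetical scores and one-hot labels.
y_hat = np.array([[2.0, 0.5, -1.0],
                  [0.1, 1.5,  0.3]])
y_true = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])

# Route 1: explicit softmax, then cross-entropy.
exp_shifted = np.exp(y_hat - y_hat.max(axis=1, keepdims=True))
y_hat_softmax = exp_shifted / exp_shifted.sum(axis=1, keepdims=True)
loss_explicit = np.mean(-np.sum(y_true * np.log(y_hat_softmax), axis=1))

# Route 2: computed directly from the logits via log-sum-exp,
# i.e. per example: logsumexp(logits) - logit_of_true_class.
m = y_hat.max(axis=1)
logsumexp = np.log(np.exp(y_hat - m[:, None]).sum(axis=1)) + m
loss_fused = np.mean(logsumexp - np.sum(y_true * y_hat, axis=1))

print(np.allclose(loss_explicit, loss_fused))  # True
```

The fused route never materializes the softmax probabilities, which avoids taking the log of values that may underflow to zero; this is why the documentation recommends the fused function over composing tf.nn.softmax with a hand-written cross-entropy.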