# How to average summaries over multiple batches?


Asked on December 19, 2018

First, create a new `tf.Summary` object and do the averaging of your measure in Python.

Example:

```python
import numpy as np
import tensorflow as tf

accuracies = []

# Calculate your measure over as many batches as you need
for batch in validation_set:
    accuracies.append(sess.run(accuracy_op, feed_dict={...}))

# Take the mean of your measure
accuracy = np.mean(accuracies)

# Create a new Summary object with your measure
summary = tf.Summary()
summary.value.add(tag='accuracy', simple_value=accuracy)

# Add it to the TensorBoard summary writer
# Make sure to specify a step parameter to get nice graphs over time
summary_writer.add_summary(summary, global_step)
```
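The averaging step itself is plain NumPy. A minimal, TensorFlow-free sketch (with made-up per-batch accuracies) of what gets written into the summary:

```python
import numpy as np

# Hypothetical per-batch accuracies collected over a validation set
accuracies = [0.90, 0.80, 0.85, 0.95]

# The single scalar that would go into the manual tf.Summary
accuracy = float(np.mean(accuracies))
print(accuracy)  # 0.875
```

Because the mean is computed outside the graph, it works for any measure you can evaluate per batch, not just ones with a dedicated metric op.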

Alternatively, maintain an exponential moving average of your measure inside the graph:

```python
ema = tf.train.ExponentialMovingAverage(decay=my_decay_value, zero_debias=True)
maintain_ema_op = ema.apply(your_losses_list)

# Create an op that will update the moving averages after each training step.
with tf.control_dependencies([your_original_train_op]):
    train_op = tf.group(maintain_ema_op)
```

And then use this:

```python
sess.run(train_op)
```

This also runs `maintain_ema_op`, because it is specified as a control dependency of `train_op`.

To get the exponential moving average of one of your losses, use:

```python
moving_average = ema.average(an_item_from_your_losses_list_above)
```

Then retrieve its value with:

```python
value = sess.run(moving_average)
```
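For intuition, the update rule behind a zero-debiased moving average can be sketched in plain Python (this mirrors the math, not TensorFlow's internals):

```python
def debiased_ema(values, decay):
    """Zero-debiased exponential moving average: the biased accumulator
    is divided by (1 - decay**step) so early values are not pulled
    toward the zero initialization."""
    biased = 0.0
    debiased = 0.0
    for step, v in enumerate(values, start=1):
        biased = decay * biased + (1.0 - decay) * v
        debiased = biased / (1.0 - decay ** step)
    return debiased

# A constant series stays constant from the very first step
print(debiased_ema([0.5, 0.5, 0.5], decay=0.9))  # 0.5
```

This is why `zero_debias=True` matters: without the correction term, early averages are biased toward zero.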

A third option is to compute the average within the computation graph itself.

TensorFlow provides streaming metrics for this. Each streaming metric has an update op that folds in the values from the current batch, and a value op that returns the running average for the summary.

Example:

```python
accuracy = ...
streaming_accuracy, streaming_accuracy_update = tf.contrib.metrics.streaming_mean(accuracy)
streaming_accuracy_scalar = tf.summary.scalar('streaming_accuracy', streaming_accuracy)

# set up your session etc.
# Streaming metrics use local variables, so run
# sess.run(tf.local_variables_initializer()) before the loop.

for i in iterations:
    for b in batches:
        sess.run([streaming_accuracy_update], feed_dict={...})

    streaming_summ = sess.run(streaming_accuracy_scalar)
```
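The update/read split that streaming metrics use can be sketched without TensorFlow; `StreamingMean` here is a hypothetical stand-in for `streaming_mean`:

```python
class StreamingMean:
    """Running mean with separate update and read steps, analogous to
    the update op / value op pair returned by streaming_mean."""
    def __init__(self):
        self.total = 0.0
        self.count = 0

    def update(self, value):
        # Like running the update op on one batch
        self.total += float(value)
        self.count += 1

    def value(self):
        # Like evaluating the metric tensor for the summary
        return self.total / self.count

m = StreamingMean()
for batch_accuracy in [0.5, 0.7, 0.9]:
    m.update(batch_accuracy)
print(m.value())  # 0.7
```

The key point is that reading the value never resets the accumulator: you call `update` once per batch and read the averaged value whenever you want to write a summary.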