So I am running a CNN for a classification problem. I have 3 conv layers with 3 pooling layers. P3 is the output of the last pooling layer, whose dimensions are [Batch_size, 4, 12, 48], and I want to flatten that tensor into a [Batch_size, 2304] matrix, where 2304 = 4*12*48. I had been working with "Option A" (see below) for a while, but one day I wanted to try out "Option B", which should theoretically give me the same result. However, it did not. I have checked the following thread before:
Is tf.contrib.layers.flatten(x) the same as tf.reshape(x, [n, 1])?
but that just added more confusion, since trying "Option C" (taken from the aforementioned thread) gave yet another different result.
P3 = tf.nn.max_pool(A3, ksize = [1, 2, 2, 1], strides = [1, 2, 2, 1], padding='VALID')
P3_shape = P3.get_shape().as_list()
P = tf.contrib.layers.flatten(P3)                                      # <----- Option A
P = tf.reshape(P3, [-1, P3_shape[1] * P3_shape[2] * P3_shape[3]])      # <----- Option B
P = tf.reshape(P3, [tf.shape(P3)[0], -1])                              # <----- Option C
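In case it is useful, this is the kind of minimal, self-contained comparison I have in mind (a sketch assuming TF 1.x, where tf.contrib is still available; the random input is just a stand-in for my real P3):

import numpy as np
import tensorflow as tf  # TF 1.x, where tf.contrib.layers is available

# Random stand-in for P3 with my pooling output shape: [Batch_size, 4, 12, 48]
x = np.random.rand(2, 4, 12, 48).astype(np.float32)

P3 = tf.placeholder(tf.float32, [None, 4, 12, 48])
P3_shape = P3.get_shape().as_list()

opt_a = tf.contrib.layers.flatten(P3)                                  # Option A
opt_b = tf.reshape(P3, [-1, P3_shape[1] * P3_shape[2] * P3_shape[3]])  # Option B
opt_c = tf.reshape(P3, [tf.shape(P3)[0], -1])                          # Option C

with tf.Session() as sess:
    a, b, c = sess.run([opt_a, opt_b, opt_c], feed_dict={P3: x})
    print(a.shape, b.shape, c.shape)               # I expect (2, 2304) three times
    print(np.array_equal(a, b), np.array_equal(b, c))

On a stand-in like this I would expect all three outputs to be identical, so whatever differs in my network must come from how each option interacts with the rest of the graph.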
I am more inclined to go with "Option B", since that is the one I have seen used in a video by Dandelion Mane (https://www.youtube.com/watch?v=eBbEDRsCmv4&t=631s), but I would like to understand why these 3 options give different results.
Thanks for any help!