Since my data is too large to fit in memory, I read it with pd.read_csv('',chunksize=). I am using categorical_crossentropy as my loss function; however, because of the chunking, the last chunk's one-hot target ends up with just one column, so I get this error:
You are passing a target array of shape (2110, 1) while using as loss categorical_crossentropy.
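For context, this is roughly how I read and train chunk by chunk (the file name, column names, feature count, and model below are placeholders, not my real code):

    import pandas as pd
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense

    # Placeholder model; mine is bigger, but the shape problem is the same
    model = Sequential([
        Dense(64, activation='relu', input_shape=(10,)),  # 10 features assumed
        Dense(5, activation='softmax'),                   # 5 classes assumed
    ])
    model.compile(optimizer='adam', loss='categorical_crossentropy')

    for chunk in pd.read_csv('train.csv', chunksize=10000):
        train_data = chunk.drop(columns=['label']).values
        # One-hot encode only the labels present in this chunk: on the
        # last chunk this can yield a single column
        train_labels = pd.get_dummies(chunk['label']).values
        model.fit(train_data, train_labels, epochs=1)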
What I have tried:
Now I know binary_crossentropy works for a single-column target, so I tried switching the loss depending on the chunk:
    from sklearn.model_selection import train_test_split

    X_train, X_test, y_train, y_test = train_test_split(
        train_data, train_labels, shuffle=True, test_size=0.3)
    # Pick the loss based on how many target columns this chunk has
    if y_train.shape[1] == 1:
        loss = 'binary_crossentropy'
    else:
        loss = 'categorical_crossentropy'
When I do this, I get the error:
IndexError: index 1 is out of bounds for axis 0 with size 1
If I just use binary_crossentropy for every chunk, I get the same error. My data is one-hot encoded. How can I resolve this? Thanks.
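Would fixing the one-hot width across chunks be the right way around this? Something like the sketch below, where all_classes is the full class list known up front (the names and values are illustrative):

    import pandas as pd

    # Full class list known ahead of time (illustrative values)
    all_classes = [0, 1, 2, 3, 4]

    def one_hot(labels: pd.Series) -> pd.DataFrame:
        # reindex pads any class missing from this chunk with a zero
        # column, so every chunk's target has the same width
        return pd.get_dummies(labels).reindex(columns=all_classes, fill_value=0)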