Processing before deep neural networks

Before feeding data into any neural network, we first have to tokenize it and then convert it to sequences. For this purpose, we use the Keras Tokenizer provided with TensorFlow, setting a maximum vocabulary of 200,000 words and a maximum sequence length of 40. Any sentence with more than 40 words is consequently truncated to its first 40 words:

import tensorflow as tf

Tokenizer = tf.keras.preprocessing.text.Tokenizer
pad_sequences = tf.keras.preprocessing.sequence.pad_sequences

tk = Tokenizer(num_words=200000)
max_len = 40

After setting up the Tokenizer, tk, we fit it on the concatenated list of the first and second questions, so that it learns all the possible word terms present in the learning corpus.
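A minimal sketch of this fitting and conversion step is shown below; the DataFrame df and its columns question1 and question2 are assumed names for illustration, not taken from the original code:

# Learn the vocabulary from both question columns (df, question1 and
# question2 are assumed names).
tk.fit_on_texts(list(df.question1) + list(df.question2))

# Convert each question into a sequence of word indices, then pad or
# truncate it to max_len tokens; truncating='post' keeps the first 40 words.
x1 = pad_sequences(tk.texts_to_sequences(df.question1),
                   maxlen=max_len, truncating='post')
x2 = pad_sequences(tk.texts_to_sequences(df.question2),
                   maxlen=max_len, truncating='post')

The resulting x1 and x2 arrays of shape (num_samples, 40) can then be fed directly to the network's input layers.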
