These simple ideas are widespread and fairly effective for many tasks. They are particularly useful for reducing the number of unique tokens in a document that your processing has to handle.
spaCy already marks whether each token is a stop word and stores the result in the token's is_stop attribute, which makes it very handy for text cleaning. Let's take a quick look:
    sentence_example = "the AI/AGI uprising cannot happen without the progress of NLP"
    [(token, token.is_stop, token.is_punct) for token in nlp(sentence_example)]

    [(the, True, False),
     (AI, False, False),
     (/, False, True),
     (AGI, True, False),
     (uprising, False, False),
     (can, True, False),
     (not, True, False),
     (happen, False, False),
     (without, True, False),
     ...
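To see how this helps with cleaning, here is a minimal sketch that drops stop words and punctuation and then compares the number of unique tokens before and after; it assumes the en_core_web_sm pipeline is the one loaded into nlp, but any loaded pipeline exposes the same attributes.

    import spacy

    # Assumption: the small English pipeline is installed and used here.
    nlp = spacy.load("en_core_web_sm")

    sentence_example = "the AI/AGI uprising cannot happen without the progress of NLP"
    doc = nlp(sentence_example)

    # Keep only tokens that are neither stop words nor punctuation.
    cleaned = [token.text for token in doc if not token.is_stop and not token.is_punct]
    print(cleaned)

    # Cleaning shrinks the set of unique tokens that downstream steps must handle.
    print(len({token.text for token in doc}), "unique tokens before cleaning")
    print(len(set(cleaned)), "unique tokens after cleaning")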