PunktSentenceTokenizer
PunktSentenceTokenizer is a sentence tokenizer in the Natural Language Toolkit (NLTK). It implements the unsupervised Punkt sentence segmentation algorithm described by Kiss and Strunk in 2006 and is designed to split text into sentences without relying on a fixed, language-specific list of abbreviations.
The tokenizer is trainable and language-aware. From a corpus of text in the target language it learns a model of likely abbreviations, collocations, and words that frequently begin sentences, and it uses these parameters to decide whether a given period ends a sentence or belongs to an abbreviation.
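For instance, a model can be trained on domain-specific text either by passing the text to the constructor or via PunktTrainer. The following is a minimal sketch; the training sample is invented for illustration, and a real corpus should be much larger:

```python
from nltk.tokenize.punkt import PunktSentenceTokenizer, PunktTrainer

# Illustrative training text; in practice, use a large corpus
# from the target language and domain.
train_text = (
    "Dr. Smith visited the lab on Jan. 5. The results, per Prof. "
    "Jones, were promising. Approx. half the samples passed."
)

# Option 1: train directly via the constructor.
tokenizer = PunktSentenceTokenizer(train_text)

# Option 2: use PunktTrainer to accumulate statistics over many texts.
trainer = PunktTrainer()
trainer.train(train_text, finalize=False)
trainer.finalize_training()
tokenizer = PunktSentenceTokenizer(trainer.get_params())
```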
Usage involves calling the tokenize method on a string to obtain a list of sentence strings. Pretrained Punkt models for English and a number of other languages ship with NLTK as the punkt resource and are what nltk.sent_tokenize uses by default.
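A minimal usage sketch, assuming the pretrained models have been downloaded (the sample sentence is illustrative, and newer NLTK releases may require the punkt_tab resource instead of punkt):

```python
import nltk
from nltk.tokenize import sent_tokenize

nltk.download("punkt")  # fetch the pretrained Punkt models once

text = "Mr. Brown arrived at 9 a.m. He left by noon."

# Convenience function backed by the default English Punkt model.
print(sent_tokenize(text))

# Or load a specific pretrained model explicitly and call tokenize.
tokenizer = nltk.data.load("tokenizers/punkt/english.pickle")
print(tokenizer.tokenize(text))
```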
Limitations include potential segmentation errors on abbreviations that are rare or absent from the training data, and on text with heavily nonstandard punctuation.