#tokenize #split #segmenter #word

segtok

Sentence segmentation and word tokenization tools

6 releases

0.1.5 Feb 18, 2025
0.1.4 Feb 8, 2025
0.1.3 Jan 21, 2025

#1352 in Text processing


188 downloads per month
Used in yake-rust

MIT license

63KB
1.5K SLoC

segtok

A rule-based sentence segmenter (splitter) and word tokenizer using orthographic features. Ported from the Python package of the same name (no longer maintained), with the contractions bug fixed.

use segtok::{segmenter::*, tokenizer::*};

fn main() {
    let input = include_str!("../tests/test_google.txt");

    // Split the text into sentences, then tokenize each sentence.
    // split_contractions post-processes the token stream so that
    // contractions like "don't" come out as separate tokens.
    let sentences: Vec<Vec<_>> = split_multi(input, SegmentConfig::default())
        .into_iter()
        .map(|span| split_contractions(web_tokenizer(&span)).collect())
        .collect();
}
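
The same pipeline also works on an in-memory string, which is handy for quick experiments. A minimal sketch using only the items from the example above; the input string is made up for illustration:

use segtok::{segmenter::*, tokenizer::*};

fn main() {
    // Hypothetical input: two sentences, with a contraction and a URL,
    // both cases the tokenizer is designed to handle.
    let input = "It's a rule-based splitter. See https://example.com for details.";

    // Segment into sentences, then tokenize each sentence in turn.
    for span in split_multi(input, SegmentConfig::default()) {
        let tokens: Vec<_> = split_contractions(web_tokenizer(&span)).collect();
        println!("sentence with {} tokens", tokens.len());
    }
}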

Dependencies

~3–4MB
~75K SLoC