sa

Sanskrit संस्कृतम्

ISO 639-3: san
Words in vocabulary: 183,249
Best compression: 4.44x
Best isotropy: 0.8264

Sample text

Excerpts from Sanskrit Wikipedia articles.

सः यादवकुलस्य राजा आसीत्। प्राचीनवंशावली स्टब्स् प्राप्तः भाषानुबन्धः अपूर्णलेखा...
सः अयोध्याकुलस्य राजा आसीत्। प्राचीन-वंशावली अयोध्याकुल स्टब्स् अपूर्णलेखाः योजन...
स्वर्णगौरीव्रतम् इत्युक्ते गौरीतृतीया एव । तत्र द्रष्टव्यम् । स्टब्स् अपूर्णलेखा...

Most common words

The 20 most frequently used words in Sanskrit Wikipedia.

Top 20 words in Sanskrit
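
A frequency table like this can be reproduced with a plain word count over the article text. Below is a minimal sketch using only the standard library; the corpus string is a toy stand-in, and the whitespace split is a rough approximation for Devanagari text (no sandhi splitting or normalization):

from collections import Counter

# Toy stand-in for the Wikipedia dump text; replace with the real corpus.
corpus = "सः राजा आसीत् । सः धर्मं रक्षति ।"

# Naive whitespace tokenization; real Sanskrit processing would need
# script-aware segmentation.
counts = Counter(corpus.split())
for word, freq in counts.most_common(20):
    print(word, freq)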

Performance dashboard

Key metrics for all model types at a glance.

Performance dashboard for Sanskrit

Quick start

Tokenizer

from wikilangs import tokenizer
tok = tokenizer('latest', 'sa', 32000)  # latest release, Sanskrit ('sa'), 32k BPE vocabulary
tokens = tok.tokenize("Your text here")

N-gram

from wikilangs import ngram
ng = ngram('latest', 'sa', gram_size=3)  # trigram model (2- to 5-gram variants available)
score = ng.score("Your text here")

Markov chain

from wikilangs import markov
mc = markov('latest', 'sa', depth=3)  # context depth 3 (depths 1-5 available)
text = mc.generate(length=50)

Vocabulary

from wikilangs import vocabulary
vocab = vocabulary('latest', 'sa')
info = vocab.lookup("word")  # frequency and IDF details for a word

Embeddings

from wikilangs import embeddings
emb = embeddings('latest', 'sa', dimension=64)  # 64-dimensional variant (32d/64d/128d available)
vec = emb.embed_word("word")

Available models

Model Type        Variants           Description
Tokenizers        8k, 16k, 32k, 64k  BPE tokenizers with different vocabulary sizes
N-gram (Word)     2, 3, 4, 5-gram    Word-level language models
N-gram (Subword)  2, 3, 4, 5-gram    Subword-level language models
Markov (Word)     Depth 1–5          Word-level text generation
Markov (Subword)  Depth 1–5          Subword-level text generation
Vocabulary        —                  Word dictionary with frequency and IDF
Embeddings        32d, 64d, 128d     Position-aware word embeddings
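
Assuming the constructor signatures shown in Quick start carry over to the other variants (an assumption; only one variant of each call appears above), the table maps onto constructor arguments roughly as follows:

from wikilangs import tokenizer, ngram, markov, embeddings

# Vocabulary sizes for the four BPE tokenizer variants (assumed to be
# passed as plain integers, as in the Quick start).
tokenizers = {size: tokenizer('latest', 'sa', size)
              for size in (8000, 16000, 32000, 64000)}

# Word-level n-gram models for n = 2..5.
ngrams = {n: ngram('latest', 'sa', gram_size=n) for n in range(2, 6)}

# Markov chains for context depths 1..5.
markovs = {d: markov('latest', 'sa', depth=d) for d in range(1, 6)}

# Embedding tables at the three published dimensions.
embs = {d: embeddings('latest', 'sa', dimension=d) for d in (32, 64, 128)}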

Model evaluation

Tokenizer performance

Compression ratios and token statistics across vocabulary sizes.

Tokenizer compression
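
One common way to read compression ratio is characters per token: how much raw text each token covers on average. A minimal sketch, assuming tok.tokenize returns a token list as in Quick start:

from wikilangs import tokenizer

text = "Your Sanskrit text here"

for size in (8000, 16000, 32000, 64000):
    tok = tokenizer('latest', 'sa', size)
    tokens = tok.tokenize(text)
    # Characters per token; larger vocabularies usually compress better,
    # up to the 4.44x best figure reported above.
    ratio = len(text) / len(tokens)
    print(size, round(ratio, 2))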

N-gram evaluation

Perplexity and entropy metrics across n-gram sizes.

N-gram perplexity
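
Perplexity and entropy are two views of one quantity: if H is the average negative log2-probability per token, perplexity is 2^H. A self-contained sketch with made-up probabilities:

import math

# Per-token probabilities a model might assign to a held-out sentence
# (illustrative values only).
probs = [0.2, 0.05, 0.1, 0.3]

# Cross-entropy in bits per token.
entropy = -sum(math.log2(p) for p in probs) / len(probs)

# Perplexity: the effective branching factor of the model.
perplexity = 2 ** entropy
print(round(entropy, 3), round(perplexity, 2))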

Markov chain evaluation

Entropy and branching factor by context depth.

Markov entropy
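
Branching factor counts the distinct continuations seen after each context, and entropy measures how evenly they are distributed; both typically shrink as context depth grows. A toy sketch over a short word list:

import math
from collections import defaultdict, Counter

words = "सः राजा आसीत् सः राजा अभवत् सः धर्मं रक्षति".split()
depth = 2

# Map each depth-length context to counts of the words that follow it.
transitions = defaultdict(Counter)
for i in range(len(words) - depth):
    context = tuple(words[i:i + depth])
    transitions[context][words[i + depth]] += 1

# Average branching factor: mean number of distinct successors per context.
branching = sum(len(c) for c in transitions.values()) / len(transitions)

# Average transition entropy in bits.
def ent(counter):
    total = sum(counter.values())
    return -sum((n / total) * math.log2(n / total) for n in counter.values())

entropy = sum(ent(c) for c in transitions.values()) / len(transitions)
print(round(branching, 2), round(entropy, 3))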

Vocabulary analysis

Word frequency distribution and Zipf's law analysis.

Zipf's law
Top 20 words
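
Zipf's law says a word's frequency is roughly proportional to 1/rank, so log-frequency against log-rank should fall near a straight line with slope about -1. A sketch of that fit (the frequency values are illustrative):

import numpy as np

# Descending word frequencies (illustrative values; use the real counts).
freqs = np.array([1000, 480, 330, 240, 200, 160, 140, 120, 110, 100])
ranks = np.arange(1, len(freqs) + 1)

# Fit log(freq) = slope * log(rank) + intercept; Zipfian data gives
# a slope near -1.
slope, intercept = np.polyfit(np.log(ranks), np.log(freqs), 1)
print(round(slope, 3))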

Embeddings evaluation

Isotropy and vector space quality metrics.

Embedding isotropy
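
Isotropy measures how evenly the embedding cloud spreads across directions. One common proxy, sketched below with random stand-in vectors, is the smallest-to-largest eigenvalue ratio of the centered covariance, where 1.0 is perfectly isotropic; the 0.8264 figure above comes from the project's own pipeline, which may use a different estimator:

import numpy as np

# Stand-in embedding matrix: n words x 64 dimensions.
X = np.random.randn(1000, 64)

# Center, then compare the extreme singular values of the point cloud.
Xc = X - X.mean(axis=0)
s = np.linalg.svd(Xc, compute_uv=False)
isotropy = (s[-1] / s[0]) ** 2   # eigenvalue ratio of the covariance
print(round(isotropy, 4))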

Full research report

Access the complete ablation study with all metrics, visualizations, and generated text samples on HuggingFace.

View on HuggingFace →