Bihari (bh) · भोजपुरी (Bhojpuri)
- Vocabulary size: 38,630 words
- Best compression: 4.11x
- Best isotropy: 0.8673
Sample text
Excerpts from Bihari Wikipedia articles.
नेल्सन मंडेला दक्खिन अफिरका के पहिला करिया राष्ट्रपति आ पहिला चुनल गइल राष्ट्रपत...
बबुआ कलां भारत के झारखंड राज्य में एक ठो कसबा बाटे। के शहर आ कस्बा
घटना जनम - मन्मथनाथ गुप्त - भारतीय स्वतन्त्रता संग्राम क एगो प्रमुख क्रान्तिकारी...
Most common words
The 20 most frequently used words in Bihari Wikipedia.
Interactive playground
Explore the Bihari models with browser-based demos.
Performance dashboard
Key metrics for all model types at a glance.
Quick start
Tokenizer
```python
from wikilangs import tokenizer
tok = tokenizer('latest', 'bh', 32000)
tokens = tok.tokenize("Your text here")
```
N-gram
```python
from wikilangs import ngram
ng = ngram('latest', 'bh', gram_size=3)
score = ng.score("Your text here")
```
Markov chain
```python
from wikilangs import markov
mc = markov('latest', 'bh', depth=3)
text = mc.generate(length=50)
```
Vocabulary
```python
from wikilangs import vocabulary
vocab = vocabulary('latest', 'bh')
info = vocab.lookup("word")
```
Embeddings
```python
from wikilangs import embeddings
emb = embeddings('latest', 'bh', dimension=64)
vec = emb.embed_word("word")
```
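A typical use of word vectors is comparing words by cosine similarity. A minimal sketch, assuming `embed_word` returns a fixed-length numeric vector (e.g. a NumPy array or list of floats); the two Bhojpuri words are arbitrary examples:

```python
import numpy as np
from wikilangs import embeddings

emb = embeddings('latest', 'bh', dimension=64)

def cosine(a, b):
    # Cosine similarity: dot product of L2-normalized vectors.
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "भारत" (India) vs. "राज्य" (state): related words should score
# higher than unrelated pairs.
print(cosine(emb.embed_word("भारत"), emb.embed_word("राज्य")))
```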
Available models
| Model Type | Variants | Description |
|---|---|---|
| Tokenizers | 8k, 16k, 32k, 64k | BPE tokenizers with different vocabulary sizes |
| N-gram (Word) | 2, 3, 4, 5-gram | Word-level language models |
| N-gram (Subword) | 2, 3, 4, 5-gram | Subword-level language models |
| Markov (Word) | Depth 1–5 | Word-level text generation |
| Markov (Subword) | Depth 1–5 | Subword-level text generation |
| Vocabulary | тАФ | Word dictionary with frequency and IDF |
| Embeddings | 32d, 64d, 128d | Position-aware word embeddings |
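The variant columns map onto the constructor arguments shown in Quick start. A minimal sketch looping over a few variants; it assumes the 8k/16k/64k tokenizers are selected by the literal sizes 8000, 16000, and 64000, in line with the 32000 used above:

```python
from wikilangs import tokenizer, ngram

text = "Your text here"

# BPE tokenizers at each published vocabulary size.
for vocab_size in (8000, 16000, 32000, 64000):
    tok = tokenizer('latest', 'bh', vocab_size)
    print(vocab_size, len(tok.tokenize(text)))

# Word-level n-gram models from bigram to 5-gram.
for n in (2, 3, 4, 5):
    ng = ngram('latest', 'bh', gram_size=n)
    print(n, ng.score(text))
```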
Model evaluation
Tokenizer performance
Compression ratios and token statistics across vocabulary sizes.
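Compression ratio here means characters covered per token: higher is better, since each token carries more text. A minimal sketch of the measurement, assuming `tokenize` returns a list of tokens; the dashboard's 4.11x figure may be computed slightly differently (e.g. over bytes rather than characters):

```python
from wikilangs import tokenizer

sample = "नेल्सन मंडेला दक्खिन अफिरका के पहिला करिया राष्ट्रपति"  # any held-out text

for vocab_size in (8000, 16000, 32000, 64000):
    tok = tokenizer('latest', 'bh', vocab_size)
    n_tokens = len(tok.tokenize(sample))
    print(f"{vocab_size}: {len(sample) / n_tokens:.2f} chars/token")
```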

N-gram evaluation
Perplexity and entropy metrics across n-gram sizes.
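Perplexity and entropy are two views of the same quantity: for cross-entropy H in bits per token, perplexity is 2^H. A self-contained sketch with a unigram model over a toy token stream (the actual evaluation differs in smoothing, n-gram order, and data):

```python
import math
from collections import Counter

corpus = "के आ के में आ के".split()  # toy token stream
counts = Counter(corpus)
total = sum(counts.values())

# Cross-entropy of the corpus under its own unigram distribution.
H = -sum(c / total * math.log2(c / total) for c in counts.values())
print(f"entropy    = {H:.3f} bits/token")
print(f"perplexity = {2 ** H:.3f}")
```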

Markov chain evaluation
Entropy and branching factor by context depth.
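Branching factor is the average number of distinct continuations observed after each context; deeper contexts typically branch less, making generation more coherent but less varied. A minimal sketch of the computation over a toy token stream:

```python
from collections import defaultdict

tokens = "के आ के में आ के आ में के".split()  # toy token stream

for depth in (1, 2, 3):
    successors = defaultdict(set)
    for i in range(len(tokens) - depth):
        context = tuple(tokens[i:i + depth])
        successors[context].add(tokens[i + depth])
    branching = sum(len(s) for s in successors.values()) / len(successors)
    print(f"depth {depth}: branching factor {branching:.2f}")
```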

Vocabulary analysis
Word frequency distribution and Zipf's law analysis.
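Zipf's law predicts that frequency falls off roughly as a power of rank, f(r) ∝ r^(-s) with s ≈ 1, i.e. a straight line of slope -s on a log-log plot. A minimal sketch fitting the exponent with NumPy; the `freqs` array is hypothetical and stands in for the real rank-frequency counts:

```python
import numpy as np

# Hypothetical frequency counts, sorted descending (rank 1 first).
freqs = np.array([10000, 5200, 3300, 2600, 2000, 1700, 1500, 1300])
ranks = np.arange(1, len(freqs) + 1)

# Least-squares fit of log f = -s * log r + c.
slope, _ = np.polyfit(np.log(ranks), np.log(freqs), 1)
print(f"Zipf exponent s ≈ {-slope:.2f}")  # ≈ 1 for Zipfian data
```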


Embeddings evaluation
Isotropy and vector space quality metrics.
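Isotropy measures how evenly word vectors spread across directions; a score near 1 means no single direction dominates the space. One common proxy is the squared ratio of the smallest to the largest singular value of the mean-centered embedding matrix. A minimal sketch, assuming `embed_word` returns a numeric vector; the exact metric behind the 0.8673 figure may differ:

```python
import numpy as np
from wikilangs import embeddings

emb = embeddings('latest', 'bh', dimension=64)

# Hypothetical word sample; in practice use far more words than
# dimensions (e.g. the few thousand most frequent vocabulary entries).
words = ["भारत", "राज्य", "शहर", "जनम", "राष्ट्रपति"]
W = np.array([emb.embed_word(w) for w in words], dtype=float)
W -= W.mean(axis=0)  # center the point cloud

# 1.0 would mean variance is spread evenly over all directions.
s = np.linalg.svd(W, compute_uv=False)
print(f"isotropy proxy: {(s[-1] / s[0]) ** 2:.4f}")
```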

Full research report
Access the complete ablation study with all metrics, visualizations, and generated text samples on HuggingFace.
View on HuggingFace →