bpy
Bishnupriya (āĻŦāĻŋāώā§āĻŖā§āĻĒā§āϰāĻŋāϝāĻŧāĻž āĻŽāĻŖāĻŋāĻĒā§āϰā§)

- Words in vocabulary: 32,965
- Best compression: 4.93x
- Best isotropy: 0.6926
Sample text
Excerpts from Bishnupriya Wikipedia articles.
āĻāĻĨāĻžāĻ āĻŦāĻŋāώā§āĻŖā§āĻĒā§āϰāĻŋāϝāĻŧāĻž āĻŽāĻŖāĻŋāĻĒā§āϰ⧠āĻ āĻžāϰāϰ āĻ āύāĻŋāϝāĻŧāĻŽāĻŋāϤ āĻĒāϤā§āϰāĻŋāĻāĻž āĻāĻšāĻžāύ, āϝā§āĻšāĻžāύ āϏāĻāĻā§āϰāĻžāĻŽ āϏāĻŋāĻāĻšāϰ āϏāĻŽā§āĻĒāĻž...
.āĻāĻŽāĻ(.mo) āĻāĻ āĻŽāĻžāĻāĻžāĻāϰ āύāĻžāĻā§ āϞā§āĻĒāĻāϰāĻŋāϏāĻŋ āĻāĻŋāĻāĻĒāĻž āĻĄāĻŽā§āĻāύāĻ (ccTLD)āĨ¤ āĻŽāĻŋāϞāĻžāĻĒ āĻāĻāĻāĻāύāĻ-āϰ āĻŽāĻžāĻāĻžāĻāϰ āϤāĻĨ...
āĻŦāĻžāĻāϞāĻžāĻĻā§āĻļāϰ āϏā§āĻĨāĻžāύā§āϝāĻŧ āϏāϰāĻāĻžāϰāϰ āϏāĻŋāĻāĻŋāϞ⧠āĻāϏā§āϤāĻžāĻ āĻāĻŋāϞāĻž āĻĒāϰāĻŋāώāĻĻ āϏāĻŋāĻāĻŋ āĻāϰā§āĻĒā§āϰā§āĻļāύ (ā§ŦāĻ) āĻĨāĻžāύāĻž āĻŦāĻžāϰā§...
Most common words
The 20 most frequently used words in Bishnupriya Wikipedia.
Quick start
Tokenizer

```python
from wikilangs import tokenizer

tok = tokenizer('latest', 'bpy', 32000)
tokens = tok.tokenize("Your text here")
```

N-gram

```python
from wikilangs import ngram

ng = ngram('latest', 'bpy', gram_size=3)
score = ng.score("Your text here")
```

Markov chain

```python
from wikilangs import markov

mc = markov('latest', 'bpy', depth=3)
text = mc.generate(length=50)
```

Vocabulary

```python
from wikilangs import vocabulary

vocab = vocabulary('latest', 'bpy')
info = vocab.lookup("word")
```

Embeddings

```python
from wikilangs import embeddings

emb = embeddings('latest', 'bpy', dimension=64)
vec = emb.embed_word("word")
```

Available models
| Model Type | Variants | Description |
|---|---|---|
| Tokenizers | 8k, 16k, 32k, 64k | BPE tokenizers with different vocabulary sizes |
| N-gram (Word) | 2, 3, 4, 5-gram | Word-level language models |
| N-gram (Subword) | 2, 3, 4, 5-gram | Subword-level language models |
| Markov (Word) | Depth 1–5 | Word-level text generation |
| Markov (Subword) | Depth 1–5 | Subword-level text generation |
| Vocabulary | – | Word dictionary with frequency and IDF |
| Embeddings | 32d, 64d, 128d | Position-aware word embeddings |
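The variants map directly onto the Quick start factories. As a rough sketch (assuming the same signatures shown above, with vocabulary size and depth as the varying parameters), selecting a different variant looks like this:

```python
from wikilangs import tokenizer, markov

# Sketch only: parameter values follow the variants table above,
# assuming the factory signatures from the Quick start section.
tok_64k = tokenizer('latest', 'bpy', 64000)  # 64k-vocab BPE tokenizer
mc_deep = markov('latest', 'bpy', depth=5)   # deepest word-level Markov chain
```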
Model evaluation
Tokenizer performance
Compression ratios and token statistics across vocabulary sizes.
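Compression ratio is taken here as characters per token, so higher means the tokenizer packs more text into each token. A minimal sketch of that measurement, assuming `tok.tokenize` returns a list of tokens as in the Quick start:

```python
def compression_ratio(tok, text: str) -> float:
    """Characters per token: higher means better compression of the input."""
    tokens = tok.tokenize(text)
    return len(text) / max(len(tokens), 1)

# Example: compare vocabulary sizes on the same sample text.
# ratio = compression_ratio(tok, "Your text here")
```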

N-gram evaluation
Perplexity and entropy metrics across n-gram sizes.
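Perplexity and entropy are two views of the same quantity: with cross-entropy H in bits per token, perplexity is 2^H. A self-contained sketch using hypothetical per-token probabilities (illustrative values, not wikilangs output):

```python
import math

def entropy_and_perplexity(token_probs: list[float]) -> tuple[float, float]:
    """Cross-entropy in bits per token, and perplexity = 2**entropy."""
    entropy = -sum(math.log2(p) for p in token_probs) / len(token_probs)
    return entropy, 2 ** entropy

# Hypothetical per-token probabilities assigned by an n-gram model:
h, ppl = entropy_and_perplexity([0.25, 0.5, 0.125, 0.25])
print(f"entropy={h:.3f} bits/token, perplexity={ppl:.2f}")
```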

Markov chain evaluation
Entropy and branching factor by context depth.
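Branching factor can be read as the average number of distinct next words observed per context; deeper contexts usually branch less, which is why entropy falls with depth. An illustrative sketch over a toy corpus (the wikilangs internals are not shown here, so this is an assumption about the metric, not its actual implementation):

```python
from collections import defaultdict

def avg_branching_factor(corpus: list[str], depth: int) -> float:
    """Average number of distinct next-words per context of the given depth."""
    successors: dict[tuple, set] = defaultdict(set)
    for i in range(len(corpus) - depth):
        context = tuple(corpus[i:i + depth])
        successors[context].add(corpus[i + depth])
    return sum(len(s) for s in successors.values()) / len(successors)

words = "a b a c a b a d".split()
print(avg_branching_factor(words, depth=1))  # deeper contexts branch less
```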

Vocabulary analysis
Word frequency distribution and Zipf's law analysis.
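Zipf's law says word frequency falls off roughly as 1/rank, so log frequency plotted against log rank should be close to a line of slope -1. A minimal sketch that fits that slope from a frequency list (hypothetical counts, not the actual bpy data):

```python
import math

def zipf_exponent(freqs: list[int]) -> float:
    """Least-squares slope of log(freq) vs. log(rank); Zipfian data gives ~ -1."""
    pts = [(math.log(r), math.log(f))
           for r, f in enumerate(sorted(freqs, reverse=True), start=1)]
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    return (sum((x - mx) * (y - my) for x, y in pts)
            / sum((x - mx) ** 2 for x, _ in pts))

print(zipf_exponent([1000, 500, 333, 250, 200]))  # close to -1
```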


Embeddings evaluation
Isotropy and vector space quality metrics.
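Isotropy measures how evenly the embeddings spread across directions of the vector space, with 1.0 meaning perfectly uniform. One simple proxy (not necessarily the exact metric reported above) is the ratio of smallest to largest singular value of the centered embedding matrix:

```python
import numpy as np

def isotropy_proxy(emb_matrix: np.ndarray) -> float:
    """Ratio of smallest to largest singular value of the centered embedding
    matrix: 1.0 means variance is spread evenly across all directions."""
    centered = emb_matrix - emb_matrix.mean(axis=0)
    s = np.linalg.svd(centered, compute_uv=False)
    return float(s.min() / s.max())

# Hypothetical check: random 64-d vectors score close to isotropic.
print(isotropy_proxy(np.random.randn(1000, 64)))
```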

Full research report
Access the complete ablation study with all metrics, visualizations, and generated text samples on HuggingFace.
View on HuggingFace →