as

Assamese āĻ…āϏāĻŽā§€āϝāĻŧāĻž

ISO 639-3: asm (individual, living language)
225,407 words in vocabulary
4.54× best compression
0.8547 best isotropy

Sample text

Excerpts from Assamese Wikipedia articles.

āϜāϝāĻŧāύāĻ—ā§° āĻŽāϜāĻŋāϞāĻĒ⧁⧰ āĻ­āĻžā§°āϤ⧰ āĻĒāĻļā§āϚāĻŋāĻŽāĻŦāĻ‚āĻ— ā§°āĻžāĻœā§āϝ⧰ āĻĻāĻ•ā§āώāĻŋāĻŖ āϚāĻŦā§āĻŦāĻŋāĻļ āĻĒā§°āĻ—āύāĻž āϜāĻŋāϞāĻžāϤ āĻ…ā§ąāĻ¸ā§āĻĨāĻŋāϤ āĻāĻ–āύ āϚāĻšā§°āĨ¤...
āĻšāĻžāĻŦ⧁āĻ‚ āĻŽā§ˆāĻĻāĻžāĻŽ āĻšā§ˆāϛ⧇ āφāĻšā§‹āĻŽāϏāĻ•āϞ⧰ āĻĒāĻžā§āϚāĻŽā§°āĻžāϜāϧāĻžāύ⧀ āĻšāĻžāĻŦ⧁āς⧰ āϟāĻžāχāϭ⧇āϟāĻŋāϤ āĻ…ā§ąāĻ¸ā§āĻĨāĻŋāϤ āĻĻ⧁āϟāĻž āĻĒā§ā§°āĻžāĻšā§€āύ āĻŽā§ˆāĻĻāĻž...
āĻ­āĻžā§°āϤ⧀āϝāĻŧ āĻ¨ā§āϝāĻžāϝāĻŧ āϏāĻ‚āĻšāĻŋāϤāĻž (IAST: BhāratÄĢya Nyāya Saᚃhitā), āĻ­āĻžā§°āϤ⧀āϝāĻŧ āĻ—āĻŖā§°āĻžāĻœā§āϝ⧰ āĻ…āĻĒā§°āĻžāϧ āϏāĻ‚...

Most common words

The 20 most frequently used words in Assamese Wikipedia.

Top 20 words in Assamese

Performance dashboard

Key metrics for all model types at a glance.

Performance dashboard for Assamese

Quick start

Tokenizer

from wikilangs import tokenizer
tok = tokenizer('latest', 'as', 32000)  # latest release, Assamese, 32k BPE vocabulary
tokens = tok.tokenize("Your text here")

N-gram

from wikilangs import ngram
ng = ngram('latest', 'as', gram_size=3)  # trigram model
score = ng.score("Your text here")

Markov chain

from wikilangs import markov
mc = markov('latest', 'as', depth=3)  # context depth of 3 tokens
text = mc.generate(length=50)  # generate 50 tokens

Vocabulary

from wikilangs import vocabulary
vocab = vocabulary('latest', 'as')
info = vocab.lookup("word")  # frequency and IDF for a word

Embeddings

from wikilangs import embeddings
emb = embeddings('latest', 'as', dimension=64)  # 64-dimensional vectors
vec = emb.embed_word("word")

Available models

Model Type        Variants            Description
Tokenizers        8k, 16k, 32k, 64k   BPE tokenizers with different vocabulary sizes
N-gram (Word)     2-, 3-, 4-, 5-gram  Word-level language models
N-gram (Subword)  2-, 3-, 4-, 5-gram  Subword-level language models
Markov (Word)     Depth 1–5           Word-level text generation
Markov (Subword)  Depth 1–5           Subword-level text generation
Vocabulary        —                   Word dictionary with frequency and IDF
Embeddings        32d, 64d, 128d      Position-aware word embeddings

Model evaluation

Tokenizer performance

Compression ratios and token statistics across vocabulary sizes.

Tokenizer compression
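A compression ratio for a tokenizer is commonly computed as input characters per output token (higher means fewer tokens per text). This is an assumption about how the metric is defined here; the sketch below uses a toy whitespace split standing in for the real BPE tokenizer.

```python
def compression_ratio(text: str, tokens: list[str]) -> float:
    """Characters of input per token produced (higher = better compression)."""
    return len(text) / len(tokens)

text = "The quick brown fox jumps over the lazy dog"
tokens = text.split()  # placeholder for tok.tokenize(text)
print(round(compression_ratio(text, tokens), 2))  # → 4.78
```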

N-gram evaluation

Perplexity and entropy metrics across n-gram sizes.

N-gram perplexity
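Perplexity is the exponential of the average negative log-likelihood the model assigns to each token. A minimal sketch with made-up per-token probabilities (a real model would supply these from its n-gram counts):

```python
import math

def perplexity(token_probs: list[float]) -> float:
    """exp of the mean negative log-likelihood over the sequence."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

probs = [0.2, 0.1, 0.5, 0.05]  # hypothetical P(w_i | context)
print(round(perplexity(probs), 2))  # → 6.69
```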

Markov chain evaluation

Entropy and branching factor by context depth.

Markov entropy
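Per-context entropy and branching factor can be read straight off a Markov chain's transition counts: entropy measures how unpredictable the next token is, and the branching factor is the number of distinct successors. A sketch with toy counts (the real model's transitions are learned from the corpus):

```python
import math
from collections import Counter

def state_metrics(successor_counts: Counter) -> tuple[float, int]:
    """Shannon entropy (bits) and branching factor for one context."""
    total = sum(successor_counts.values())
    entropy = -sum((c / total) * math.log2(c / total)
                   for c in successor_counts.values())
    return entropy, len(successor_counts)

counts = Counter({"river": 2, "valley": 1, "state": 1})
entropy, branching = state_metrics(counts)
print(round(entropy, 2), branching)  # → 1.5 3
```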

Vocabulary analysis

Word frequency distribution and Zipf's law analysis.

Zipf's law
Top 20 words
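Zipf's law predicts that word frequency is inversely proportional to rank, so a log-log plot of frequency against rank should be close to a straight line with slope −1. A sketch that fits that slope by least squares, using a synthetic exact 1/rank series:

```python
import math

freqs = [1000 / r for r in range(1, 21)]  # ideal Zipfian frequencies
xs = [math.log(r) for r in range(1, len(freqs) + 1)]
ys = [math.log(f) for f in freqs]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
# least-squares slope of log-frequency vs log-rank
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
print(round(slope, 2))  # → -1.0 for an ideal Zipf distribution
```

Real corpora deviate from −1, especially at the head and tail of the distribution; the fitted slope quantifies how Zipfian the vocabulary is.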

Embeddings evaluation

Isotropy and vector space quality metrics.

Embedding isotropy
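Isotropy measures how evenly embedding vectors spread across the space. One simple proxy, sketched below, is the ratio of the smallest to the largest per-dimension variance (1.0 = perfectly isotropic); the metric reported above may use a different estimator (e.g. partition-function or PCA based), and the 3-d vectors here are toy data.

```python
def isotropy(vectors: list[list[float]]) -> float:
    """Ratio of min to max per-dimension variance (1.0 = isotropic)."""
    dims, n = len(vectors[0]), len(vectors)
    means = [sum(v[d] for v in vectors) / n for d in range(dims)]
    variances = [sum((v[d] - means[d]) ** 2 for v in vectors) / n
                 for d in range(dims)]
    return min(variances) / max(variances)

vecs = [[1.0, 0.2, -0.5], [-1.0, -0.1, 0.4],
        [0.5, 0.0, 0.1], [-0.5, -0.1, 0.0]]
print(round(isotropy(vecs), 3))  # → 0.024
```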

Full research report

Access the complete ablation study with all metrics, visualizations, and generated text samples on HuggingFace.

View on HuggingFace →