awa

Awadhi अवधी

ISO 639-3: awa (individual language, living)
Words in vocabulary: 16,641
Best compression: 3.89x
Best isotropy: 0.7358

Sample text

Excerpts from Awadhi Wikipedia articles.

नीलम संजीव रेड्डी (२७ अक्तूबर - ९ नवंबर भारत कय छठवा राष्ट्रपति रहे। वनकय कार्यक...
नकुड, भारत देश के उत्तर प्रदेश प्रान्त के सहारनपुर जिला कय एक्ठु नगर पालिका परिष...
नसीराबाद, भारत देश के उत्तर प्रदेश प्रान्त के रायबरेली जिला कय एक्ठु नगर पंचायत ...

Most common words

The 20 most frequently used words in Awadhi Wikipedia.

Top 20 words in Awadhi

Performance dashboard

Key metrics for all model types at a glance.

Performance dashboard for Awadhi

Quick start

Tokenizer

from wikilangs import tokenizer
# Load the latest 32k-vocabulary BPE tokenizer for Awadhi
tok = tokenizer('latest', 'awa', 32000)
# Split raw text into subword tokens
tokens = tok.tokenize("Your text here")
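
To reproduce a compression figure like the 3.89x above, divide the size of the raw text by the number of tokens it produces. A minimal sketch, assuming the ratio is defined as UTF-8 bytes per token (the full report states the exact definition):

from wikilangs import tokenizer

# Assumption: compression ratio = UTF-8 bytes per token
tok = tokenizer('latest', 'awa', 32000)
text = "Your text here"
ratio = len(text.encode('utf-8')) / len(tok.tokenize(text))
print(f"compression: {ratio:.2f}x")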

N-gram

from wikilangs import ngram
# Load the latest Awadhi word 3-gram language model
ng = ngram('latest', 'awa', gram_size=3)
# Score how well the text fits the model
score = ng.score("Your text here")
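
A common use of the score is ranking alternative strings. A minimal sketch, assuming a higher score means the model finds the text more probable (check the report for the actual scale and direction):

from wikilangs import ngram

# Assumption: higher score = more fluent under the model
ng = ngram('latest', 'awa', gram_size=3)
candidates = ["Your text here", "Another candidate"]
print(max(candidates, key=ng.score))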

Markov chain

from wikilangs import markov
# Load a depth-3 Markov chain trained on Awadhi Wikipedia
mc = markov('latest', 'awa', depth=3)
# Generate 50 tokens of text
text = mc.generate(length=50)
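
Markov output generally becomes more locally coherent, and less varied, as the context depth grows. A quick sweep over the supported depths (1 to 5, per the model table below) makes that easy to compare:

from wikilangs import markov

# Generate one 50-token sample at each supported context depth
for depth in range(1, 6):
    mc = markov('latest', 'awa', depth=depth)
    print(depth, mc.generate(length=50))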

Vocabulary

from wikilangs import vocabulary
# Load the Awadhi word dictionary (frequency and IDF per word)
vocab = vocabulary('latest', 'awa')
# Look up stats for a single word
info = vocab.lookup("word")
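
Per the model table below, the vocabulary stores a frequency and an IDF value for each word. The return type of lookup is not documented on this page, so this sketch simply prints the object to reveal its schema:

from wikilangs import vocabulary

# Inspect what lookup() returns; expect frequency and IDF fields
vocab = vocabulary('latest', 'awa')
info = vocab.lookup("word")
print(info)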

Embeddings

from wikilangs import embeddings
# Load 64-dimensional position-aware Awadhi word embeddings
emb = embeddings('latest', 'awa', dimension=64)
# Get the vector for a single word
vec = emb.embed_word("word")
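
A typical first use of word vectors is similarity. A minimal sketch computing cosine similarity on top of embed_word, assuming it returns a plain sequence of floats; the two words are placeholders:

import math
from wikilangs import embeddings

# Cosine similarity between two word vectors (placeholder words)
emb = embeddings('latest', 'awa', dimension=64)
a = emb.embed_word("word")
b = emb.embed_word("another")
dot = sum(x * y for x, y in zip(a, b))
norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
print(f"cosine similarity: {dot / norm:.3f}")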

Available models

Model Type          Variants            Description
Tokenizers          8k, 16k, 32k, 64k   BPE tokenizers with different vocabulary sizes
N-gram (Word)       2, 3, 4, 5-gram     Word-level language models
N-gram (Subword)    2, 3, 4, 5-gram     Subword-level language models
Markov (Word)       Depth 1–5           Word-level text generation
Markov (Subword)    Depth 1–5           Subword-level text generation
Vocabulary          —                   Word dictionary with frequency and IDF
Embeddings          32d, 64d, 128d      Position-aware word embeddings

Model evaluation

Tokenizer performance

Compression ratios and token statistics across vocabulary sizes.

Tokenizer compression

N-gram evaluation

Perplexity and entropy metrics across n-gram sizes.

N-gram perplexity
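
Perplexity and entropy are two views of the same quantity: with entropy H in bits per token, perplexity is 2^H, the effective number of equally likely next tokens. For example:

import math

H = 5.0              # hypothetical entropy, bits per token
perplexity = 2 ** H  # 2^5 = 32 effective next-token choices
assert math.isclose(math.log2(perplexity), H)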

Markov chain evaluation

Entropy and branching factor by context depth.

Markov entropy

Vocabulary analysis

Word frequency distribution and Zipf's law analysis.

Zipf's law
Top 20 words
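
Zipf's law says word frequency falls off roughly as 1/rank^s, with s near 1 for natural language, so the points form a straight line of slope -s on a log-log plot. A minimal sketch estimating s by least squares, using made-up counts rather than real Awadhi frequencies:

import math

# Fit freq ~ C / rank**s via least squares in log-log space
freqs = [1000, 520, 340, 260, 205, 170, 148, 130, 115, 104]  # illustrative
xs = [math.log(r) for r in range(1, len(freqs) + 1)]
ys = [math.log(f) for f in freqs]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
print(f"estimated Zipf exponent s = {-slope:.2f}")  # close to 1.0 here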

Embeddings evaluation

Isotropy and vector space quality metrics.

Embedding isotropy
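
Isotropy measures how evenly the embeddings spread across the directions of the vector space (1.0 = perfectly uniform). Several definitions exist in the literature; this sketch uses one simple proxy, the ratio of the smallest to the largest eigenvalue of the vectors' covariance matrix, which is not necessarily the metric behind the 0.7358 figure above:

import numpy as np
from wikilangs import embeddings

# Isotropy proxy: min/max eigenvalue of the covariance of word vectors.
# Needs many more words than this placeholder list to be meaningful.
emb = embeddings('latest', 'awa', dimension=64)
words = ["word", "another", "example"]  # replace with a few hundred words
vecs = np.array([emb.embed_word(w) for w in words])
eigs = np.linalg.eigvalsh(np.cov(vecs, rowvar=False))
print(f"isotropy proxy: {eigs.min() / eigs.max():.4f}")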

Full research report

Access the complete ablation study with all metrics, visualizations, and generated text samples on HuggingFace.

View on HuggingFace →