
Malayalam മലയാളം

ISO 639-1: ml · ISO 639-3: mal
Words in vocabulary: 801,863
Best compression: 5.37×
Best isotropy: 0.7956

Sample text

Excerpts from Malayalam Wikipedia articles.

സിറിയ പരസ്യമായി തൂക്കുശിക്ഷ നടപ്പിലാക്കാറുണ്ട്. രണ്ട് ജൂതന്മാരെയും ഇസ്രായേൽ ചാരൻ...
ജെസ്നറിയേസീ കുടുംബത്തിലെ പൂച്ചെടികളുടെ ഒരു ഇനമാണ് അച്ചിമെനെസ് സെറ്റോന. ലാണ് എച്ച...
ചുവന്ന സൂര്യകാന്തി അല്ലെങ്കിൽ മെക്സിക്കൻ സൂര്യകാന്തി (Tithonia rotundifolia) എന്...

Most common words

The 20 most frequently used words in Malayalam Wikipedia.

Top 20 words in Malayalam

Performance dashboard

Key metrics for all model types at a glance.

Performance dashboard for Malayalam

Quick start

Tokenizer

from wikilangs import tokenizer
tok = tokenizer('latest', 'ml', 32000)   # latest release, Malayalam, 32k vocabulary
tokens = tok.tokenize("Your text here")  # split into subword tokens

N-gram

from wikilangs import ngram
ng = ngram('latest', 'ml', gram_size=3)  # trigram language model
score = ng.score("Your text here")       # model score for the text

Markov chain

from wikilangs import markov
mc = markov('latest', 'ml', depth=3)  # condition on a 3-token context
text = mc.generate(length=50)         # sample a sequence of length 50
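
Under the hood, a depth-k Markov chain samples each next token from the pool of tokens that followed the current k-token context in the training corpus. A minimal self-contained sketch of that idea (illustrative only, not the wikilangs internals):

import random
from collections import defaultdict

def train(tokens, depth=3):
    # Map each depth-token context to every token observed right after it
    table = defaultdict(list)
    for i in range(len(tokens) - depth):
        table[tuple(tokens[i:i + depth])].append(tokens[i + depth])
    return table

def generate(table, seed, length=50):
    # seed must be a context of the same depth the table was trained with
    out = list(seed)
    for _ in range(length):
        followers = table.get(tuple(out[-len(seed):]))
        if not followers:
            break  # unseen context: stop early
        out.append(random.choice(followers))
    return out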

Vocabulary

from wikilangs import vocabulary
vocab = vocabulary('latest', 'ml')
info = vocab.lookup("word")  # frequency and IDF statistics for a word

Embeddings

from wikilangs import embeddings
emb = embeddings('latest', 'ml', dimension=64)  # 64-dimensional vectors
vec = emb.embed_word("word")                    # position-aware word vector
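
Assuming embed_word returns a NumPy-compatible vector (the snippet above does not say), a typical use is cosine similarity between two words:

import numpy as np
from wikilangs import embeddings

emb = embeddings('latest', 'ml', dimension=64)
a = np.asarray(emb.embed_word("word"))
b = np.asarray(emb.embed_word("another"))
# Cosine similarity: 1.0 = identical direction, 0.0 = orthogonal
similarity = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))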

Available models

Model Type          Variants             Description
Tokenizers          8k, 16k, 32k, 64k    BPE tokenizers with different vocabulary sizes
N-gram (Word)       2, 3, 4, 5-gram      Word-level language models
N-gram (Subword)    2, 3, 4, 5-gram      Subword-level language models
Markov (Word)       Depth 1–5            Word-level text generation
Markov (Subword)    Depth 1–5            Subword-level text generation
Vocabulary          —                    Word dictionary with frequency and IDF
Embeddings          32d, 64d, 128d       Position-aware word embeddings
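
The Variants column corresponds to the size arguments of the Quick start constructors. For example (the exact argument values for each variant, such as 64000 for the 64k tokenizer, are an assumption extrapolated from the snippets above):

from wikilangs import tokenizer, ngram, markov, embeddings

tok = tokenizer('latest', 'ml', 64000)           # 64k-vocabulary tokenizer
ng = ngram('latest', 'ml', gram_size=5)          # 5-gram language model
mc = markov('latest', 'ml', depth=2)             # depth-2 Markov chain
emb = embeddings('latest', 'ml', dimension=128)  # 128d embeddings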

Model evaluation

Tokenizer performance

Compression ratios and token statistics across vocabulary sizes.

Tokenizer compression
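
Compression ratio is read here as input characters per output token, so higher is better; that definition is a common convention and an assumption, since the report may normalize differently. A sketch comparing the four vocabulary sizes with the Quick start API:

from wikilangs import tokenizer

text = "Your Malayalam text here"
for size in (8000, 16000, 32000, 64000):
    tok = tokenizer('latest', 'ml', size)
    n_tokens = len(tok.tokenize(text))
    # Characters per token: larger vocabularies usually compress better
    print(size, len(text) / n_tokens)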

N-gram evaluation

Perplexity and entropy metrics across n-gram sizes.

N-gram perplexity
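
Perplexity is 2 raised to the cross-entropy in bits per token, so the two metrics move together. A self-contained, unsmoothed sketch of the computation (the evaluated models use their own estimation details):

import math
from collections import Counter

def mle_perplexity(tokens, n=3):
    # Count n-grams, and their (n-1)-token contexts with matching weights
    grams = Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    ctxs = Counter(g[:-1] for g in grams.elements())
    total = sum(grams.values())
    # Cross-entropy in bits per token under maximum-likelihood estimates
    h = -sum(c * math.log2(c / ctxs[g[:-1]]) for g, c in grams.items()) / total
    return 2 ** h  # perplexity = 2^entropy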

Markov chain evaluation

Entropy and branching factor by context depth.

Markov entropy
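
Branching factor counts how many distinct continuations each context admits; deeper contexts pin the text down more, which is why entropy drops with depth. A self-contained sketch, assuming the metric is the mean number of distinct successors per context:

from collections import defaultdict

def branching_factor(tokens, depth=3):
    nexts = defaultdict(set)
    for i in range(len(tokens) - depth):
        # Record each distinct token seen after this depth-token context
        nexts[tuple(tokens[i:i + depth])].add(tokens[i + depth])
    return sum(len(s) for s in nexts.values()) / len(nexts)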

Vocabulary analysis

Word frequency distribution and Zipf's law analysis.

Zipf's law
Top 20 words
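
Zipf's law predicts that the r-th most frequent word has frequency proportional to 1/r^s with s close to 1. A sketch that fits s by least squares in log-log space (the report's exact fitting procedure may differ):

import math

def zipf_exponent(frequencies):
    # Sort into rank order and regress log(frequency) on log(rank)
    ys = [math.log(f) for f in sorted(frequencies, reverse=True)]
    xs = [math.log(r) for r in range(1, len(ys) + 1)]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope  # Zipf's law predicts a value near 1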

Embeddings evaluation

Isotropy and vector space quality metrics.

Embedding isotropy
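
Isotropy measures whether the embedding cloud spreads evenly across all directions of the vector space. One standard score is the partition-function measure of Mu & Viswanath (2018); whether the 0.7956 above is computed exactly this way is an assumption:

import numpy as np

def isotropy(V):
    # V: (n_words, dim) matrix of word vectors
    _, eigvecs = np.linalg.eigh(V.T @ V)
    # Z(a) = sum_i exp(a . v_i), evaluated at each eigenvector a
    Z = np.exp(V @ eigvecs).sum(axis=0)
    return float(Z.min() / Z.max())  # 1.0 = perfectly isotropic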

Full research report

Access the complete ablation study with all metrics, visualizations, and generated text samples on HuggingFace.

View on HuggingFace →