zgh

Standard Moroccan Tamazight โตœโดฐโตŽโดฐโตฃโต‰โต–โตœ โตœโดฐโตโดฐโตกโดฐโตขโตœ

ISO 639-3: zgh (individual, living language)
Vocabulary size: 35,191 words
Best compression: 3.84x
Best isotropy: 0.7259

Sample text

Excerpts from Standard Moroccan Tamazight Wikipedia articles.

โดฑโต‰ โดฑโต‰ โต™โต‰ โตโต– BBC (โต™ โตœโตโดณโตโต‰โตฃโตœ: British Broadcasting Corporation) โต‰โต™โดฐโต–โต“โตโตโต
โดฐโดณโดฐโดทโดฐโตฃ โดฐโดผโต•โดฐโตโตšโต‰โตš โต‰โดณโดฐ โดฐโดณโดทโต“โตฃ โดท โดฐโต™โดทโดทโต‰ โต โตกโดฐโต™โต–โตโตฃโต‰ โดณ โตœโดฐโดทโดทโต“โต”โตœ โตœโดฐโดผโต•โดฐโตโตšโต‰โตšโตœ, โต โต“โต”โตโตขโดฐโตโตฃ โดฐโตŽโดฐโตข...
โต„โดฑโดทโตโดผโตœโตœโดฐโตƒ โต™โต™โต‰โต™โต‰ (โต™ โตœโดฐโต„โต•โดฐโดฑโตœ: ุนุจุฏ ุงู„ูุชุงุญ ุงู„ุณูŠุณูŠ), โต‰โตโต“โต โดณ 19 โตโต“โตกโดฐโตโดฑโต‰โต” โดณ โตœโต‡โดฐโต€โต‰โต”โตœ, โต‰โดณ...

Most common words

The 20 most frequently used words in Standard Moroccan Tamazight Wikipedia.

[Chart: Top 20 words in Standard Moroccan Tamazight]

Performance dashboard

Key metrics for all model types at a glance.

[Chart: Performance dashboard for Standard Moroccan Tamazight]

Quick start

Tokenizer

from wikilangs import tokenizer
# Load the latest BPE tokenizer for zgh with a 32,000-token vocabulary
tok = tokenizer('latest', 'zgh', 32000)
tokens = tok.tokenize("Your text here")

N-gram

from wikilangs import ngram
# Load the latest trigram model for zgh and score a string
ng = ngram('latest', 'zgh', gram_size=3)
score = ng.score("Your text here")

Markov chain

from wikilangs import markov
# Load a depth-3 Markov chain for zgh and generate 50 tokens
mc = markov('latest', 'zgh', depth=3)
text = mc.generate(length=50)

Vocabulary

from wikilangs import vocabulary
# Load the zgh vocabulary and look up frequency/IDF info for a word
vocab = vocabulary('latest', 'zgh')
info = vocab.lookup("word")

Embeddings

from wikilangs import embeddings
# Load 64-dimensional embeddings for zgh and embed a single word
emb = embeddings('latest', 'zgh', dimension=64)
vec = emb.embed_word("word")

Available models

Model type        Variants           Description
Tokenizers        8k, 16k, 32k, 64k  BPE tokenizers with different vocabulary sizes
N-gram (word)     2, 3, 4, 5-gram    Word-level language models
N-gram (subword)  2, 3, 4, 5-gram    Subword-level language models
Markov (word)     Depth 1–5          Word-level text generation
Markov (subword)  Depth 1–5          Subword-level text generation
Vocabulary        (none)             Word dictionary with frequency and IDF
Embeddings        32d, 64d, 128d     Position-aware word embeddings
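
Each family in the table loads through the constructor shown in the quick start, with the variant passed as an argument. A minimal sketch (the mapping of "8k" to 8000 and so on is an assumption):

from wikilangs import tokenizer, ngram, markov, vocabulary, embeddings
# One instance of each model family, using the quick-start signatures;
# variant values come from the table above (8k assumed to mean 8000).
tok = tokenizer('latest', 'zgh', 8000)            # smallest BPE variant
ng = ngram('latest', 'zgh', gram_size=5)          # highest n-gram order
mc = markov('latest', 'zgh', depth=1)             # shallowest Markov chain
vocab = vocabulary('latest', 'zgh')               # single variant
emb = embeddings('latest', 'zgh', dimension=128)  # widest embeddings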

Model evaluation

Tokenizer performance

Compression ratios and token statistics across vocabulary sizes.

[Chart: Tokenizer compression]
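
The headline 3.84x figure can be approximated with the quick-start API. A minimal sketch, assuming compression is measured as input characters per output token and that the vocabulary-size names map to round numbers:

from wikilangs import tokenizer

def compression_ratio(text, vocab_size):
    # Characters per token: higher means each token covers more text.
    tok = tokenizer('latest', 'zgh', vocab_size)
    return len(text) / len(tok.tokenize(text))

sample = "โดฐโดณโดฐโดทโดฐโตฃ โดฐโดผโต•โดฐโตโตšโต‰โตš โต‰โดณโดฐ โดฐโดณโดทโต“โตฃ โดท โดฐโต™โดทโดทโต‰"
for size in (8000, 16000, 32000, 64000):
    print(size, f"{compression_ratio(sample, size):.2f}x")

Larger vocabularies generally compress better, since more whole words become single tokens.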

N-gram evaluation

Perplexity and entropy metrics across n-gram sizes.

[Chart: N-gram perplexity]
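
Perplexity and entropy are two views of the same quantity: perplexity = 2^entropy when entropy is measured in bits per token. The sketch below assumes ng.score() returns an average log2-probability per token; that return convention is an assumption, not documented on this page.

import math
from wikilangs import ngram

text = "โดฐโดณโดฐโดทโดฐโตฃ โดฐโดผโต•โดฐโตโตšโต‰โตš โต‰โดณโดฐ โดฐโดณโดทโต“โตฃ"
for n in (2, 3, 4, 5):
    ng = ngram('latest', 'zgh', gram_size=n)
    avg_log2_prob = ng.score(text)   # assumed convention (see above)
    entropy = -avg_log2_prob         # bits per token
    perplexity = 2 ** entropy
    print(n, entropy, perplexity)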

Markov chain evaluation

Entropy and branching factor by context depth.

[Chart: Markov entropy]
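
Both metrics are properties of the transition table: branching factor is the average number of distinct continuations per context, and entropy is the average Shannon entropy of the continuation distribution. The wikilangs internals are not exposed on this page, so this illustrative sketch computes both from scratch on any token sequence:

from collections import Counter, defaultdict
import math

def markov_stats(tokens, depth):
    # Map each depth-length context to a counter of its continuations.
    transitions = defaultdict(Counter)
    for i in range(len(tokens) - depth):
        context = tuple(tokens[i:i + depth])
        transitions[context][tokens[i + depth]] += 1
    # Branching factor: mean number of distinct next tokens per context.
    branching = sum(len(c) for c in transitions.values()) / len(transitions)
    # Entropy: mean Shannon entropy (bits) of the next-token distribution.
    total_entropy = 0.0
    for counts in transitions.values():
        total = sum(counts.values())
        total_entropy -= sum(n / total * math.log2(n / total)
                             for n in counts.values())
    return branching, total_entropy / len(transitions)

Deeper contexts are more specific, so both numbers typically fall as depth increases.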

Vocabulary analysis

Word frequency distribution and Zipf's law analysis.

[Chart: Zipf's law]
[Chart: Top 20 words]
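
Zipf's law predicts that a word's frequency is roughly inversely proportional to its frequency rank, so a log-log rank-frequency plot is near-linear with slope close to -1. A self-contained check (not the wikilangs implementation):

from collections import Counter
import math

def zipf_slope(tokens):
    # Least-squares slope of log(frequency) vs. log(rank);
    # Zipfian data yields a slope near -1.
    freqs = sorted(Counter(tokens).values(), reverse=True)
    xs = [math.log(rank) for rank in range(1, len(freqs) + 1)]
    ys = [math.log(freq) for freq in freqs]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var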

Embeddings evaluation

Isotropy and vector space quality metrics.

[Chart: Embedding isotropy]
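
Isotropy measures how evenly the embedding space uses its directions; 1.0 is perfectly isotropic. One common measure is the partition-function ratio of Mu & Viswanath (2018), sketched below; whether this is the exact metric behind the 0.7259 figure above is an assumption.

import numpy as np

def isotropy(embedding_matrix):
    # Normalize rows to keep exp() numerically stable.
    norms = np.linalg.norm(embedding_matrix, axis=1, keepdims=True)
    e = embedding_matrix / norms
    # Z(c) = sum_w exp(w . c), evaluated at the eigenvectors of E^T E;
    # isotropy is min(Z) / max(Z) over those directions.
    _, eigvecs = np.linalg.eigh(e.T @ e)
    z = np.exp(e @ eigvecs).sum(axis=0)
    return z.min() / z.max()

The matrix can be built by stacking emb.embed_word(w) vectors from the quick-start API.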

Full research report

Access the complete ablation study with all metrics, visualizations, and generated text samples on HuggingFace.

View on HuggingFace →