Lugh 0.12 (latest)
Kwalitee Issues
No Core Issues.
- meta_yml_has_provides: Add all modules contained in this distribution to the META.yml field 'provides'. Module::Build or Dist::Zilla::Plugin::MetaProvides does this automatically for you.
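For a hand-maintained META.yml, the 'provides' field follows the CPAN::Meta::Spec shape sketched below. Module names and versions come from this distribution's module list; the file paths are assumptions based on the conventional lib/ layout:

```yaml
provides:
  Lugh:
    file: lib/Lugh.pm
    version: '0.12'
  Lugh::Tensor:
    file: lib/Lugh/Tensor.pm
    version: '0.12'
```

With Dist::Zilla, adding the MetaProvides::Package plugin to dist.ini generates these entries automatically:

```ini
[MetaProvides::Package]
```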
- has_separate_license_file: This is not a critical issue; it is currently mainly informative for the CPANTS authors and might be removed later.
- has_security_doc: Add a SECURITY(.pod|md) file. See Software::Security::Policy.
- security_doc_contains_contact: Add a SECURITY(.pod|md) file that includes a contact address. See Software::Security::Policy.
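A minimal SECURITY.md can satisfy both security checks; the sketch below uses placeholder wording (the reporting channel and supported-version policy are assumptions, not taken from the distribution). The Software::Security::Policy distribution on CPAN can also generate such a document programmatically:

```markdown
# Security Policy

## Supported Versions

Only the latest release of Lugh receives security fixes.

## Reporting a Vulnerability

Please report suspected vulnerabilities privately to the maintainer
(see the author contact in the distribution metadata) rather than in
the public issue tracker.
```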
- has_contributing_doc: Add a CONTRIBUTING(.pod|md) file. See https://docs.github.com/en/communities/setting-up-your-project-for-healthy-contributions/setting-guidelines-for-repository-contributors.
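A short CONTRIBUTING.md is enough to clear this check. The sketch below uses generic wording; the workflow details (issue tracker, pull requests, `prove -l t/` as the test command) are conventional assumptions, not confirmed from this repository:

```markdown
# Contributing to Lugh

Thank you for considering a contribution!

- Report bugs and request features via the issue tracker.
- For code changes, open a pull request against the main branch.
- Please run the test suite (`prove -l t/`) before submitting.
```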
Modules
| Name | Abstract | Version | View |
|---|---|---|---|
| Lugh | Pure C LLM Inference Engine for Perl (built on ggml) | 0.12 | metacpan |
| Lugh::Autograd | Automatic differentiation for Lugh tensors | 0.12 | metacpan |
| Lugh::Autograd::Ops | Differentiable operations for automatic differentiation | 0.12 | metacpan |
| Lugh::Autograd::Tensor | Tensor with automatic differentiation support | 0.12 | metacpan |
| Lugh::Context | Memory Context for Tensor Allocation | 0.12 | metacpan |
| Lugh::Graph | Computation Graph for Tensor Operations | 0.12 | metacpan |
| Lugh::Inference | Transformer Forward Pass and Token Generation | 0.12 | metacpan |
| Lugh::KVCache | KV Cache for efficient incremental decoding | 0.12 | metacpan |
| Lugh::LoRA | Low-Rank Adaptation (LoRA) adapter support for Lugh | 0.12 | metacpan |
| Lugh::MemoryPool | Reusable compute resources for efficient inference | 0.12 | metacpan |
| Lugh::Model | GGUF Model Loading and Tensor Access | 0.12 | metacpan |
| Lugh::Ops | Tensor Operations for Neural Network Computation | 0.12 | metacpan |
| Lugh::Optimizer | Optimization algorithms for Lugh training | 0.12 | metacpan |
| Lugh::Optimizer::AdamW | Adam optimizer with decoupled weight decay | 0.12 | metacpan |
| Lugh::Optimizer::LRScheduler | Learning rate scheduling for optimizers | 0.12 | metacpan |
| Lugh::Optimizer::SGD | Stochastic Gradient Descent optimizer | 0.12 | metacpan |
| Lugh::Prompt | Chat Template Formatting for LLM Conversations | 0.12 | metacpan |
| Lugh::Quant | Quantization utilities for Lugh tensors | 0.12 | metacpan |
| Lugh::RoPE | RoPE (Rotary Position Embedding) Scaling Configuration | 0.12 | metacpan |
| Lugh::Speculative | Speculative decoding for faster LLM inference | 0.12 | metacpan |
| Lugh::Tensor | N-Dimensional Tensor with ggml Backend | 0.12 | metacpan |
| Lugh::Tokenizer | BPE Tokenizer for Text Encoding and Decoding | 0.12 | metacpan |
| Lugh::Train | High-level training API for Lugh | 0.12 | metacpan |