DeepCoder-14B-Preview - AI Language Models Tool
Overview
DeepCoder-14B-Preview is a code-reasoning large language model fine-tuned from DeepSeek-R1-Distill-Qwen-14B using distributed reinforcement learning (GRPO+) and iterative context lengthening. It targets improved long-context code generation and reasoning, reports competitive scores on LiveCodeBench, and is released under the MIT License on Hugging Face.
Key Features
- Fine-tuned from DeepSeek-R1-Distill-Qwen-14B
- Trained with distributed reinforcement learning (GRPO+)
- Iterative context lengthening for extended context handling
- Optimized for long-context code generation and reasoning
- Competitive performance on LiveCodeBench and related benchmarks
- Released under the MIT License on Hugging Face
Ideal Use Cases
- Generate and complete long source-code files
- Analyze and reason about multi-file codebases
- Benchmark and evaluate code LLM capabilities in research settings
- Integrate into developer tooling for code assistance
- Prototype code-based assistants under an MIT license
Getting Started
- Open the Hugging Face model page: https://huggingface.co/agentica-org/DeepCoder-14B-Preview
- Read the model card and licensing details
- Follow the usage and API instructions in the repository or model card (see the loading sketch after this list)
- Test with representative long-context code prompts
- Monitor outputs and adjust context length or prompt design
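The model card's instructions are authoritative; as an illustration only, here is a minimal sketch of loading the checkpoint through the standard Hugging Face transformers chat workflow. The dtype, device settings, and sampling parameters shown are assumptions for a typical single-GPU setup, not values taken from the model card.

```python
# Minimal sketch, assuming DeepCoder-14B-Preview works with the standard
# transformers causal-LM chat interface. Adjust dtype/device for your hardware.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "agentica-org/DeepCoder-14B-Preview"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 weights fit your GPU
    device_map="auto",           # shard across available devices if needed
)

# A representative code prompt (kept short here; long-context prompts work the same way).
messages = [
    {"role": "user", "content": "Write a Python function that merges two sorted lists."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sampling settings below are illustrative assumptions, not official recommendations.
outputs = model.generate(
    inputs,
    max_new_tokens=1024,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

For long-context testing, replace the prompt with a representative multi-file or long-source input and raise max_new_tokens as needed, keeping the total token count within the context window described on the model card.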
Pricing
No pricing information is provided; the model is freely available on Hugging Face under the MIT License.
Limitations
- Preview release; functionality, availability, or performance may change
- Pricing and hosting costs are not specified on the model page
Key Information
- Category: Language Models
- Type: AI Language Models Tool