Enhanced Model Quality: Jamba Large 1.6 outperforms leading open models from Cohere, Meta, and Mistral on quality (Arena Hard) and speed.
Long Context Processing: With a 256K context window and hybrid SSM-Transformer architecture, Jamba excels at efficiently and accurately processing long contexts, outperforming leading open model competitors on RAG and long context QA benchmarks.
Secure Deployment: Available via AI21 Studio (SaaS) or for download from Hugging Face to deploy privately (VPC/on-prem). More deployment options coming soon.
Improved Efficiency: Delivers faster response times while maintaining high accuracy.
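For the private deployment path above, a minimal sketch of loading the model from Hugging Face with the `transformers` library. This assumes the repository id `ai21labs/AI21-Jamba-Large-1.6` and a `transformers` version with Jamba support; treat it as an illustration, not an official deployment recipe.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hugging Face repository id for Jamba Large 1.6.
MODEL_ID = "ai21labs/AI21-Jamba-Large-1.6"

def load_jamba(device_map: str = "auto"):
    """Download Jamba Large 1.6 and prepare it for private (VPC/on-prem) inference.

    device_map="auto" lets transformers shard the weights across available GPUs.
    """
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map=device_map)
    return tokenizer, model

if __name__ == "__main__":
    # Guarded behind __main__ because the download is large; run only on
    # hardware sized for the model.
    tokenizer, model = load_jamba()
    inputs = tokenizer("Summarize the key risks in this contract:", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=100)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

In a VPC or on-prem setting, the same code works against a locally mirrored copy of the weights by pointing `MODEL_ID` at a local directory instead of the hub id.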