LLaMA

LLaMA, short for Large Language Model Meta AI, is a family of transformer-based large language models developed by Meta AI. Released in 2023, the LLaMA line was positioned as an efficient and accessible set of foundation models, initially intended primarily for research use. Like other autoregressive language models, LLaMA generates text by predicting subsequent tokens from a given prompt, and it can be adapted for tasks such as completion, summarization, translation, and question answering.
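The autoregressive loop described above can be sketched with a toy next-token scorer. Everything here is a hypothetical stand-in: the tiny vocabulary and bigram weights are invented for illustration, and a real LLaMA model would produce logits from a transformer over a much larger vocabulary rather than from a lookup table.

```python
# Sketch of greedy autoregressive decoding, the generation scheme
# autoregressive LLMs such as LLaMA use. The "model" below is a toy
# bigram scorer (an assumption for illustration), not LLaMA itself.

VOCAB = ["<eos>", "the", "cat", "sat", "on", "mat"]

# Hypothetical bigram scores standing in for a transformer's logits.
BIGRAM = {
    "the": {"cat": 2.0, "mat": 0.5},
    "cat": {"sat": 2.0},
    "sat": {"on": 2.0},
    "on": {"mat": 2.0},
    "mat": {"<eos>": 2.0},
}

def next_token_logits(context):
    """Score every vocabulary token given the last context token."""
    last = context[-1]
    return [BIGRAM.get(last, {}).get(tok, 0.0) for tok in VOCAB]

def generate(prompt, max_new_tokens=8):
    """Greedy loop: repeatedly pick the highest-scoring next token
    and append it to the context, stopping at <eos>."""
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        logits = next_token_logits(tokens)
        best = VOCAB[max(range(len(VOCAB)), key=lambda i: logits[i])]
        if best == "<eos>":
            break
        tokens.append(best)
    return tokens

print(" ".join(generate(["the"])))  # the cat sat on mat
```

Real deployments replace greedy argmax with sampling strategies (temperature, top-p) over the model's logits, but the prompt-in, one-token-at-a-time structure is the same.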

The LLaMA-1 generation offered models of approximately 7B, 13B, 30B, and 65B parameters, trained on a mixture of publicly available text data and licensed sources using a high-performance transformer architecture. In 2023, Meta introduced LLaMA-2, a follow-on series available in sizes including 7B, 13B, and 70B parameters, with updates emphasizing improved instruction-following, added safety features, and broader licensing terms to support research and commercial use.

Access to LLaMA weights initially followed a restricted invitation model, limiting usage to researchers and organizations under specific terms. With LLaMA-2, Meta released the models under licenses designed to broaden access for both research and commercial applications, though users must comply with the license terms and responsible-use guidelines.

Since its release, LLaMA has been used in academia and industry for a range of natural language processing tasks, including benchmarking, prototype development, and as a basis for instruction-tuned variants built by researchers and practitioners. The models contribute to ongoing discussions about open access to large language models and the balance between performance, safety, and licensing.