Upstream information
Description
llama.cpp is a C/C++ inference engine for several LLM models. Prior to version b5721, a signed vs. unsigned integer overflow in llama.cpp's tokenizer implementation (llama_vocab::tokenize, src/llama-vocab.cpp:3036) causes unintended behavior in the size comparison used when copying tokens, allowing carefully crafted text input to trigger a heap buffer overflow in the inference engine during tokenization. This issue has been patched in version b5721.
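The flaw class is a signed/unsigned mix-up in the bounds check that guards copying tokenized output into a caller-provided buffer. The following is a minimal, hypothetical C++ sketch (not the actual llama.cpp code; the function names and types are illustrative) of how such a comparison can let an oversized token vector pass the check and overflow the destination buffer, alongside a safer comparison:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical sketch of the bug class (not the actual llama.cpp code).
// A tokenizer produces `produced` tokens and copies them into `out`, which the
// caller sized for `n_tokens_max` entries.

// Vulnerable pattern: casting the unsigned size to a signed 32-bit int before
// the comparison. If produced.size() exceeds INT32_MAX, the cast wraps to a
// negative value, the bounds check passes, and memcpy overruns the heap buffer.
int32_t copy_tokens_buggy(const std::vector<int32_t> &produced,
                          int32_t *out, int32_t n_tokens_max) {
    if ((int32_t) produced.size() <= n_tokens_max) {      // signed vs. unsigned pitfall
        std::memcpy(out, produced.data(), produced.size() * sizeof(int32_t));
        return (int32_t) produced.size();
    }
    return -((int32_t) produced.size());                  // buffer too small
}

// Safer pattern: reject negative capacities and compare in the unsigned domain
// without truncating the token count.
int32_t copy_tokens_fixed(const std::vector<int32_t> &produced,
                          int32_t *out, int32_t n_tokens_max) {
    if (n_tokens_max >= 0 && produced.size() <= (size_t) n_tokens_max) {
        std::memcpy(out, produced.data(), produced.size() * sizeof(int32_t));
        return (int32_t) produced.size();
    }
    return -1;
}

int main() {
    std::vector<int32_t> tokens = {1, 2, 3};
    int32_t out[8];
    return copy_tokens_fixed(tokens, out, 8) == 3 ? 0 : 1;
}
```

The fixed variant only illustrates the general remediation approach of comparing sizes without a narrowing signed cast; the actual change in b5721 may differ in detail.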
SUSE information
Overall state of this security issue: Does not affect SUSE products
This issue is currently rated as having important severity.
| | CNA (GitHub) | National Vulnerability Database | SUSE |
|---|---|---|---|
| Base Score | 8.6 | 8.8 | 8.6 |
| Vector | CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:C/C:H/I:H/A:H | CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H | CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:C/C:H/I:H/A:H |
| Attack Vector | Local | Network | Local |
| Attack Complexity | Low | Low | Low |
| Privileges Required | None | None | None |
| User Interaction | Required | Required | Required |
| Scope | Changed | Unchanged | Changed |
| Confidentiality Impact | High | High | High |
| Integrity Impact | High | High | High |
| Availability Impact | High | High | High |
| CVSSv3 Version | 3.1 | 3.1 | 3.1 |
SUSE Timeline for this CVE
CVE page created: Tue Jun 24 20:00:45 2025
CVE page last modified: Thu Aug 28 14:18:33 2025