Experimental Scala 3 bindings for llama.cpp using Slinc.
Add llm4s to your `build.sbt`:

```scala
libraryDependencies += "com.donderom" %% "llm4s" % "0.11.0"
```
For JDK 17, add a `.jvmopts` file in the project root:

```
--add-modules=jdk.incubator.foreign
--enable-native-access=ALL-UNNAMED
```
Version compatibility:

| llm4s | Scala | JDK | llama.cpp (commit hash) |
|---|---|---|---|
| 0.11+ | 3.3.0 | 17, 19 | 229ffff (May 8, 2024) |
| 0.10+ | 3.3.0 | 17, 19 | 49e7cb5 (Jul 31, 2023) |
| 0.6+ | --- | --- | 49e7cb5 (Jul 31, 2023) |
| 0.4+ | --- | --- | 70d26ac (Jul 23, 2023) |
| 0.3+ | --- | --- | a6803ca (Jul 14, 2023) |
| 0.1+ | 3.3.0-RC3 | 17, 19 | 447ccbe (Jun 25, 2023) |
```scala
import java.nio.file.Paths

import com.donderom.llm4s.*

// Path to the llama.cpp shared library
System.load("llama.cpp/libllama.so")
// Path to the model supported by llama.cpp
val model = Paths.get("models/llama-7b-v2/llama-2-7b.Q4_K_M.gguf")
val prompt = "Large Language Model is"

val llm = Llm(model)

// To print generation as it goes
llm(prompt).foreach: stream =>
  stream.foreach: token =>
    print(token)

// Or build a string
llm(prompt).foreach(stream => println(stream.mkString))

llm.close()
```
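Because every `Llm` instance must be released with `close()`, generation can also be wrapped in `scala.util.Using` so cleanup happens even if an exception is thrown. This is a sketch under the assumption that `Llm` implements `java.lang.AutoCloseable` (if it does not, providing a custom `scala.util.Using.Releasable[Llm]` achieves the same effect):

```scala
import java.nio.file.Paths
import scala.util.Using

import com.donderom.llm4s.*

val model = Paths.get("models/llama-7b-v2/llama-2-7b.Q4_K_M.gguf")

// Assumes Llm is AutoCloseable: Using calls close() on exit, success or failure
Using(Llm(model)): llm =>
  llm("Large Language Model is").foreach: stream =>
    println(stream.mkString)
```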
```scala
val llm = Llm(model)

llm.embeddings(prompt).foreach: embeddings =>
  embeddings.foreach: embd =>
    print(embd)
    print(' ')

llm.close()
```
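The embedding values form a vector that can be compared across prompts. As an illustration (plain Scala, not part of the llm4s API), cosine similarity between two embedding arrays:

```scala
// Cosine similarity between two embedding vectors of equal length:
// dot(a, b) / (|a| * |b|), in [-1, 1]
def cosine(a: Array[Float], b: Array[Float]): Double =
  val dot   = a.lazyZip(b).map((x, y) => x.toDouble * y).sum
  val normA = math.sqrt(a.map(x => x.toDouble * x).sum)
  val normB = math.sqrt(b.map(x => x.toDouble * x).sum)
  dot / (normA * normB)
```

A value close to 1.0 for `cosine(embA, embB)` suggests the two prompts are semantically similar.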