1 parent c1d92ce commit 6bbeea0
README.md
@@ -266,7 +266,8 @@ Then you'll need to use a custom chat handler to load the clip model and process
 >>> llm = Llama(
       model_path="./path/to/llava/llama-model.gguf",
       chat_handler=chat_handler,
-      n_ctx=2048 # n_ctx should be increased to accomodate the image embedding
+      n_ctx=2048, # n_ctx should be increased to accomodate the image embedding
+      logits_all=True,# needed to make llava work
 )
 >>> llm.create_chat_completion(
     messages = [
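
For context, a minimal end-to-end sketch of the setup this change targets, assuming the `Llava15ChatHandler` helper from `llama_cpp.llama_chat_format` as the custom chat handler mentioned in the hunk header, and using placeholder model paths and image URL:

```python
# Sketch only: paths and the image URL are placeholders, and Llava15ChatHandler
# is assumed to be the custom chat handler the README refers to.
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

# Load the CLIP projector that pairs with the LLaVA language model.
chat_handler = Llava15ChatHandler(clip_model_path="./path/to/llava/mmproj.bin")

llm = Llama(
    model_path="./path/to/llava/llama-model.gguf",
    chat_handler=chat_handler,
    n_ctx=2048,       # increased to accommodate the image embedding
    logits_all=True,  # needed to make llava work (the point of this commit)
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are an assistant that describes images."},
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": "https://example.com/image.png"}},
                {"type": "text", "text": "What is shown in this image?"},
            ],
        },
    ]
)
```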