Issues: bentoml/OpenLLM
bug: error when installing vLLM via pip install "openllm[vllm]" (#967, opened Apr 25, 2024 by Developer-atomic-amardeep)
Deploying an LLM on an on-premises server so users can access it locally via a web browser on work laptops (#934, opened Mar 18, 2024 by sanket038)
I'm having trouble getting started with OpenLLM; I don't want to use conda and I have WSL2 (#929, opened Mar 11, 2024 by Lightwave234)
bug: Error when sending a POST request to the BentoML container service (#904, opened Feb 13, 2024 by hahmad2008)
bug: Requests with "use_beam_search: true" fail with an unclear exception message (#903, opened Feb 13, 2024 by yan-virin)
How to update the prompt template without changing the openllm-core config (#895, opened Feb 11, 2024 by hahmad2008)
bug: Can't load local models; fails with a "No such file or directory" error (#894, opened Feb 8, 2024 by hahmad2008)
bug: Docker images with GPTQ-quantized models do not have auto-gptq or optimum installed (#871, opened Jan 30, 2024 by jeremyadamsfisher)