
Vertex AI - ModuleNotFoundError: No module named 'vertexai.preview.generative_models' #908

Open
BadLiveware opened this issue May 17, 2024 · 2 comments


BadLiveware commented May 17, 2024

Trying to use pr-agent with Vertex AI, running the codiumai/pr-agent:0.21-gitlab_webhook image.

Config excerpt:

[config]
model = "vertex_ai/codechat-bison"
model_turbo = "vertex_ai/codechat-bison"
fallback_models = "vertex_ai/codechat-bison"
Error output:

Failed to generate prediction with vertex_ai/codechat-bison: Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/litellm/llms/vertex_ai.py", line 287, in completion
    from vertexai.preview.generative_models import (
ModuleNotFoundError: No module named 'vertexai.preview.generative_models'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/litellm/main.py", line 1628, in completion
    model_response = vertex_ai.completion(
  File "/usr/local/lib/python3.10/site-packages/litellm/llms/vertex_ai.py", line 676, in completion
    raise VertexAIError(status_code=500, message=str(e))
litellm.llms.vertex_ai.VertexAIError: No module named 'vertexai.preview.generative_models'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/litellm/main.py", line 277, in acompletion
    init_response = await loop.run_in_executor(None, func_with_context)
  File "/usr/local/lib/python3.10/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/local/lib/python3.10/site-packages/litellm/utils.py", line 2727, in wrapper
    raise e
  File "/usr/local/lib/python3.10/site-packages/litellm/utils.py", line 2628, in wrapper
    result = original_function(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/litellm/main.py", line 2055, in completion
    raise exception_type(
  File "/usr/local/lib/python3.10/site-packages/litellm/utils.py", line 8180, in exception_type
    raise e
  File "/usr/local/lib/python3.10/site-packages/litellm/utils.py", line 7397, in exception_type
    raise RateLimitError(
litellm.exceptions.RateLimitError: VertexAIException - No module named 'vertexai.preview.generative_models'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/app/pr_agent/algo/ai_handlers/litellm_ai_handler.py", line 145, in chat_completion
    response = await acompletion(**kwargs)
  File "/usr/local/lib/python3.10/site-packages/litellm/utils.py", line 3181, in wrapper_async
    raise e
  File "/usr/local/lib/python3.10/site-packages/litellm/utils.py", line 3017, in wrapper_async
    result = await original_function(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/litellm/main.py", line 296, in acompletion
    raise exception_type(
  File "/usr/local/lib/python3.10/site-packages/litellm/utils.py", line 8180, in exception_type
    raise e
  File "/usr/local/lib/python3.10/site-packages/litellm/utils.py", line 7397, in exception_type
    raise RateLimitError(
litellm.exceptions.RateLimitError: VertexAIException - VertexAIException - No module named 'vertexai.preview.generative_models'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/app/pr_agent/algo/pr_processing.py", line 272, in retry_with_fallback_models
    return await f(model)
  File "/app/pr_agent/tools/pr_description.py", line 166, in _prepare_prediction
    self.prediction = await self._get_prediction(model)
  File "/app/pr_agent/tools/pr_description.py", line 190, in _get_prediction
    response, finish_reason = await self.ai_handler.chat_completion(
  File "/usr/local/lib/python3.10/site-packages/tenacity/_asyncio.py", line 88, in async_wrapped
    return await fn(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/tenacity/_asyncio.py", line 47, in __call__
    do = self.iter(retry_state=retry_state)
  File "/usr/local/lib/python3.10/site-packages/tenacity/__init__.py", line 314, in iter
    return fut.result()
  File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 451, in result
    return self.__get_result()
  File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
    raise self._exception
  File "/usr/local/lib/python3.10/site-packages/tenacity/_asyncio.py", line 50, in __call__
    result = await fn(*args, **kwargs)
  File "/app/pr_agent/algo/ai_handlers/litellm_ai_handler.py", line 146, in chat_completion
    except (openai.APIError, openai.Timeout) as e:
TypeError: catching classes that do not inherit from BaseException is not allowed
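
A note on the final TypeError, which masks the real error: in the openai v1.x Python SDK, openai.Timeout is a re-export of httpx.Timeout, a request-configuration object rather than an exception class, so it cannot appear in an except clause. A minimal sketch of a corrected handler, assuming the openai>=1.0 exception names (openai.APIError, openai.APITimeoutError); this is illustrative, not pr-agent's actual code:

import openai

async def chat_completion(acompletion, **kwargs):
    try:
        return await acompletion(**kwargs)
    # openai.Timeout is httpx.Timeout in openai>=1.0 (a config object,
    # not an exception), so catching it raises:
    #   TypeError: catching classes that do not inherit from BaseException
    # openai.APITimeoutError is the actual timeout exception class.
    except (openai.APIError, openai.APITimeoutError) as e:
        raise e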



BadLiveware commented May 17, 2024

This seems to be a known litellm issue (BerriAI/litellm#1463), fixed by pip install "google-cloud-aiplatform>=1.38", so a dependency upgrade in pr-agent should fix it.
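
For anyone hitting this before the dependency is bumped, a quick way to confirm the fix inside the container is to check the installed version and try the import; a minimal sketch (the version threshold is the one from the litellm issue above, the rest is illustrative):

from importlib.metadata import version

print("google-cloud-aiplatform:", version("google-cloud-aiplatform"))

try:
    # Provided by google-cloud-aiplatform >= 1.38 (per BerriAI/litellm#1463).
    from vertexai.preview.generative_models import GenerativeModel  # noqa: F401
    print("vertexai.preview.generative_models imports cleanly")
except ModuleNotFoundError as exc:
    # Upgrade with: pip install "google-cloud-aiplatform>=1.38"
    print("module still missing:", exc)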


mrT23 commented May 17, 2024

Feel free to open a PR to update this dependency.
