Assigned
Status Update
Comments
va...@google.com
ar...@google.com #2
Apologies, this was wrongly logged as a bug, whereas it should have been logged as a Feature Request.
ar...@google.com #3
Hello,
Thank you for reaching out to us with your request.
We have duly noted your feedback and will validate it thoroughly. While we cannot provide an estimated time of implementation or guarantee that the request will be fulfilled, please be assured that your input is highly valued. Your feedback enables us to enhance our products and services.
We appreciate your continued trust and support in improving our Google Cloud Platform products. If you want to report a new issue, please do not hesitate to create a new issue on the Issue Tracker.
Once again, we sincerely appreciate your valuable feedback. Thank you for your understanding and collaboration.
Description
Please provide as much information as possible. At a minimum, this should include a description of your issue and steps to reproduce the problem. If possible, please provide a summary of what steps or workarounds you have already tried, and any docs or articles you found (un)helpful.
Problem you have encountered:
status = StatusCode.INVALID_ARGUMENT
details = "`extractive_content_spec` must be not defined when the datastore is using 'chunking config'"
debug_error_string = "UNKNOWN:Error received from peer ipv4:
What you expected to happen:
Datastore chunking is required / recommended.
Steps to reproduce:
import vertexai
from vertexai.generative_models import FunctionDeclaration, GenerativeModel, Part, Tool, Content, grounding
from vertexai.preview import generative_models as preview_generative_models

vertexai.init(project=PROJECT_ID, location="us-central1")

# Ground the model with a Vertex AI Search data store (one that has a chunking config).
vertex_search_tool = Tool.from_retrieval(
    retrieval=preview_generative_models.grounding.Retrieval(
        source=preview_generative_models.grounding.VertexAISearch(
            datastore=f"projects/{PROJECT_ID}/locations/global/collections/default_collection/dataStores/{DATASTORE_ID}"
        ),
    )
)

model = GenerativeModel(
    "gemini-1.0-pro",
    generation_config={"temperature": 0},
)

chat = model.start_chat()
# Sending a grounded message raises INVALID_ARGUMENT when the data store uses a chunking config.
response = chat.send_message(PROMPT, tools=[vertex_search_tool])
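For reference, the data store path passed to VertexAISearch above follows a fixed resource-name format. A minimal sketch of a helper that builds it (the function name, project ID, and data store ID below are illustrative placeholders, not part of the SDK):

```python
# Sketch: build the fully qualified Vertex AI Search data store resource name
# used in the repro above. datastore_resource_name is a hypothetical helper;
# "my-project" and "my-datastore" are placeholder values.
def datastore_resource_name(project_id: str, datastore_id: str) -> str:
    """Return the data store path for the global location and default collection."""
    return (
        f"projects/{project_id}/locations/global/"
        f"collections/default_collection/dataStores/{datastore_id}"
    )

print(datastore_resource_name("my-project", "my-datastore"))
```

The same string can be passed directly as the `datastore` argument in the repro.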
Other information (workarounds you have tried, documentation consulted, etc):