Request for new functionality
Description
What you would like to accomplish:
Easier deployment to Vertex AI of BigQuery ML models that use Feature Store.
Currently, when training a BQML model WITHOUT using Feature Store, one can pass the relevant model-registry parameters and then deploy the model from the Vertex AI Model Registry using just the web UI.
But once a Feature Store is added, one is forced to write quite a bit of code to wire the feature lookups into serving.
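For context, the current no-Feature-Store path looks roughly like this. This is only a minimal sketch: the project, dataset, table, and column names are placeholders, but MODEL_REGISTRY and VERTEX_AI_MODEL_ID are existing BQML options.

```sql
-- Train a BQML model and register it in the Vertex AI Model Registry.
-- From there it can be deployed to an endpoint entirely in the web UI.
CREATE OR REPLACE MODEL `my_project.my_dataset.purchase_probability`
  TRANSFORM (
    ML.STANDARD_SCALER(order_value) OVER () AS order_value_scaled,
    email_domain,   -- raw column only; no Feature Store lookup possible here
    label
  )
  OPTIONS (
    model_type = 'BOOSTED_TREE_CLASSIFIER',
    input_label_cols = ['label'],
    model_registry = 'VERTEX_AI',
    vertex_ai_model_id = 'purchase_probability'
  )
AS
SELECT order_value, email_domain, label
FROM `my_project.my_dataset.training_data`;
```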
How this might work:
In BQML training, accept a feature view as a parameter.
Then, when deploying from the Vertex AI web UI, use that feature view to map the raw inputs to features and pass them to the model.
The inputs and outputs of the feature-view mapping are already well defined if the training query is constructed carefully
(join the BigQuery tables the feature view is based on, and do not re-use column names that appear in those tables).
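A sketch of how the training side might look. The feature_view option shown in the comment does not exist in BQML today and is purely illustrative of the requested feature; the join pattern, however, is what we already do so that input and output column names stay unambiguous.

```sql
-- Hypothetical: tell BQML which feature view supplies the engineered features,
-- so the Vertex AI web UI could wire up the same lookup at serving time.
CREATE OR REPLACE MODEL `my_project.my_dataset.fraud_detection`
  OPTIONS (
    model_type = 'BOOSTED_TREE_CLASSIFIER',
    input_label_cols = ['is_fraud'],
    model_registry = 'VERTEX_AI',
    vertex_ai_model_id = 'fraud_detection'
    -- , feature_view = '<feature view resource name>'  -- proposed option, does not exist today
  )
AS
SELECT
  t.order_value,
  t.is_fraud,
  f.bounce_rate,   -- feature columns come from the feature view's source table;
  f.fraud_rate,    -- no name clashes with the transaction table
  f.frequency
FROM `my_project.my_dataset.transactions` AS t
JOIN `my_project.my_dataset.email_domain_features` AS f
  ON t.email_domain = f.email_domain;
```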
To show how we would use it:
We run multiple models, e.g. for purchase-probability prediction and fraud detection.
We like to map, for example, the email domain to bounce rate, fraud rate, frequency, etc.
These mappings are used in multiple models, and updating them weekly is of great benefit (e.g. to have data on the newest OS version, the latest fraud rates, etc.); a sketch of such a mapping table follows below.
Hence Feature Store is ideal, except that the deployment becomes too hard.
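For concreteness, the shared mapping is essentially a small table refreshed on a schedule and served through a feature view. A minimal sketch, with purely illustrative table and column names:

```sql
-- Weekly-refreshed mapping table that several models share via a feature view.
CREATE OR REPLACE TABLE `my_project.my_dataset.email_domain_features` AS
SELECT
  email_domain,
  AVG(CAST(bounced AS INT64))  AS bounce_rate,
  AVG(CAST(is_fraud AS INT64)) AS fraud_rate,
  COUNT(*)                     AS frequency,
  CURRENT_TIMESTAMP()          AS feature_timestamp
FROM `my_project.my_dataset.transactions`
GROUP BY email_domain;
```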
If applicable, reasons why alternative solutions are not sufficient:
A no-code solution is very attractive to business intelligence teams, who are good at creating models but not at writing deployable Python code.
Our team is one example, and tabular models with good feature engineering (mapping via Feature Store, the TRANSFORM clause) are our bread and butter.
Other information (workarounds you have tried, documentation consulted, etc):
A Vertex AI custom prediction routine is possible, but its documentation is poor at the moment.
A Cloud Run endpoint that adds Feature Store data is also possible (https://github.com/GoogleCloudPlatform/fraudfinder), but that too requires a lot of code. It does appear to deploy the TRANSFORM, but running two endpoints (Cloud Run + Vertex AI) is more costly.
Comment
If Feature Store were deployed automatically, we would already be spending money on this, and I believe there are more teams like ours.
On the other hand, there is still a severe bug in the deployment of TRANSFORM that I have already reported, so we cannot migrate to this yet.