Assigned
Status Update
Comments
ci...@google.com <ci...@google.com> #2
Hello,
This issue report has been forwarded to the Cloud AI Platform team so that they may investigate it, but there is no ETA for its resolution at this time. Future updates regarding this issue will be provided here.
Description
Previously, I was using the Python client API / gcloud command line, and I could easily change which model I wanted to run:
gcloud ai-platform local train \
  --module-name='model.model_1.task'
and it is super easy to switch to --module-name='model.model_2.task' or --module-name='model.model_3.task'.
After packing the exact same code into a custom container, I can run it locally in the following way:
docker run --entrypoint=python $IMAGE_URI -m model.model_1.task
or
docker run --entrypoint=python $IMAGE_URI -m model.model_2.task
and in my Dockerfile I have a default entrypoint:
docker run --entrypoint=python $IMAGE_URI -m model.tf_bert_classification.task --verbosity_level=DEBUG
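For reference, the default entrypoint in my Dockerfile looks roughly like the sketch below (the base image, working directory, and requirements file are assumptions, not my exact setup; only the ENTRYPOINT line matters here):

# Minimal Dockerfile sketch (assumed layout)
FROM python:3.7
WORKDIR /app
COPY . /app
RUN pip install -r requirements.txt
# Default entrypoint: the model module is hardcoded into the image
ENTRYPOINT ["python", "-m", "model.tf_bert_classification.task", "--verbosity_level=DEBUG"]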
Today, to train with AI Platform using the custom container, I need to "hardcode" the entrypoint directly in the container.
It would be great to be able to pass part of the entrypoint as an argument, e.g. "model.model_1.task", similar to "--module-name" when using the AI Platform runtime.
Why? If I have 5 models in the same Python module, I would need to create 5 custom containers just to be able to change the entrypoint line in the Dockerfile.
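A partial workaround I have considered (a minimal sketch relying only on standard Docker ENTRYPOINT/CMD semantics; I have not verified whether AI Platform forwards training-job arguments to a custom container in the same way) is to keep only the interpreter in the ENTRYPOINT and move the module name into CMD:

# Sketch: fix the interpreter, leave the module overridable
ENTRYPOINT ["python", "-m"]
CMD ["model.model_1.task", "--verbosity_level=DEBUG"]

Anything placed after the image name then replaces CMD and is appended to the ENTRYPOINT, so locally I can swap the module without rebuilding the image:

docker run $IMAGE_URI model.model_2.task --verbosity_level=DEBUG

Still, a first-class equivalent of "--module-name" for custom containers would be much cleaner, hence this request.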
Let me know if this is unclear or if you have a better alternative.