Status: Fixed
Comments
dh...@google.com <dh...@google.com> #3
Hello,
I’m pleased to inform you that our product engineering team has resolved the reported issue. Please verify that the problem is fixed on your end as well.
If you encounter any further issues or have additional concerns, please don't hesitate to create a new issue on the Issue Tracker.
I will now proceed to close this issue. If you have any other questions or need further assistance, please feel free to let us know.
Thanks & Regards
Description
Problem you have encountered: In some situations, when there are dozens of pods and cronjobs running at the same time, some Container Logs labels may be missing. In particular, the labels.k8s-pod/job-name label can be absent from some log entries.

What you expected to happen: All of a pod's Kubernetes labels to be present on all of its container logs.
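As an illustration of the gap, here is a minimal sketch that samples recent container log entries and counts those missing the label. It assumes the google-cloud-logging Python client; the project ID and the my-cronjob- pod-name prefix are placeholders, not values from this report.

```python
# Minimal sketch: scan recent container logs from CronJob pods and count
# entries that lack the k8s-pod/job-name label. "my-project-id" and the
# "my-cronjob-" pod-name prefix are placeholders.
from google.cloud import logging

client = logging.Client(project="my-project-id")  # placeholder project

log_filter = (
    'resource.type="k8s_container" '
    'AND resource.labels.pod_name:"my-cronjob-"'
)

missing = 0
for entry in client.list_entries(
    filter_=log_filter, order_by=logging.DESCENDING, max_results=1000
):
    labels = entry.labels or {}
    if "k8s-pod/job-name" not in labels:
        missing += 1
        print(entry.timestamp, entry.resource.labels.get("pod_name"))

print(f"{missing} of the sampled entries are missing k8s-pod/job-name")
```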
Steps to reproduce: Run dozens of pods and cronjobs at the same time, then inspect the resulting container logs for entries where the labels.k8s-pod/job-name label is missing.

Other information (workarounds you have tried, documentation consulted, etc.):
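To approximate the repro, a sketch using the official kubernetes Python client to create dozens of every-minute CronJobs whose Job pods overlap. The my-cronjob- names (matching the check above), the default namespace, and the busybox workload are assumptions, not the reporter's actual setup.

```python
# Repro sketch, assuming kubeconfig access to the affected cluster:
# create dozens of one-minute CronJobs so many Job pods start at once.
from kubernetes import client, config

config.load_kube_config()
batch = client.BatchV1Api()

container = client.V1Container(
    name="echo",
    image="busybox",
    command=["sh", "-c", "echo hello from $HOSTNAME"],
)
pod_spec = client.V1PodSpec(restart_policy="Never", containers=[container])
job_spec = client.V1JobSpec(
    template=client.V1PodTemplateSpec(spec=pod_spec)
)

for i in range(40):  # "dozens" of concurrent CronJobs
    cron_job = client.V1CronJob(
        api_version="batch/v1",
        kind="CronJob",
        metadata=client.V1ObjectMeta(name=f"my-cronjob-{i}"),
        spec=client.V1CronJobSpec(
            schedule="* * * * *",  # every minute, so Job pods overlap
            job_template=client.V1JobTemplateSpec(spec=job_spec),
        ),
    )
    batch.create_namespaced_cron_job(namespace="default", body=cron_job)
```

After a few schedule ticks, the check sketched earlier can be run against the same cluster's logs; per the description, concurrency-related gaps would show up as entries without labels.k8s-pod/job-name.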