Michelle Nguyen authored
Summary: We ran into this issue in both customer prod clusters: the cache was unable to flush because we kept retrying an entry that was larger than the maximum request size we can send over etcd. We already batch etcd operations so that each request stays under the maximum request size; however, this single operation is on its own larger than the maximum request we can send. We need more information to determine how to handle this case:

- If it is a k8s update that we received directly from the k8s API, we should consider increasing the maximum request size that etcd permits, since we most likely don't want to drop any information from the message.
- If it is one of our own entries (agents, schemas, etc.), we should look into ways to shrink the message or split it into separate entries.

To figure this out, this change adds a log for the next time the error occurs.

Test Plan: n/a

Reviewers: nserrino, zasgar, #engineering

Reviewed By: nserrino, #engineering

Differential Revision: https://phab.corp.pixielabs.ai/D6329

GitOrigin-RevId: 679725b32c2818cc5634bedfdb91d4f526c2c770
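The batching behavior the summary describes can be sketched as below. This is a minimal illustration, not Pixie's actual implementation: the `entry` type, `batchEntries` function, and the size accounting are all hypothetical. The point is the failure mode in the summary: an entry that individually exceeds the limit can never fit in any batch, so it must be surfaced separately (e.g. logged) rather than retried forever.

```go
package main

import "fmt"

// entry is a hypothetical key/value pair destined for etcd.
type entry struct {
	Key   string
	Value []byte
}

// size approximates the entry's contribution to the etcd request payload.
func (e entry) size() int { return len(e.Key) + len(e.Value) }

// batchEntries splits entries into batches whose cumulative size stays
// within maxBytes. Entries that individually exceed maxBytes cannot fit
// in any batch and are returned separately so the caller can log them
// instead of retrying a flush that can never succeed.
func batchEntries(entries []entry, maxBytes int) (batches [][]entry, oversized []entry) {
	var cur []entry
	curSize := 0
	for _, e := range entries {
		if e.size() > maxBytes {
			oversized = append(oversized, e)
			continue
		}
		if curSize+e.size() > maxBytes && len(cur) > 0 {
			batches = append(batches, cur)
			cur, curSize = nil, 0
		}
		cur = append(cur, e)
		curSize += e.size()
	}
	if len(cur) > 0 {
		batches = append(batches, cur)
	}
	return batches, oversized
}

func main() {
	entries := []entry{
		{"a", make([]byte, 3)},    // size 4: fits
		{"b", make([]byte, 3)},    // size 4: fits in the same batch
		{"big", make([]byte, 20)}, // size 23: exceeds the 10-byte limit
	}
	batches, oversized := batchEntries(entries, 10)
	fmt.Println(len(batches), len(oversized)) // prints: 1 1
}
```

With this split, the flush loop can commit the well-formed batches and emit the diagnostic log described above for each oversized entry, which is the information needed to decide between raising the etcd limit and shrinking the message.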
c9d1e538