
[CNIFailure] Health check for AWS IPVLAN reports CNIFailure due to reserved address #109

@sunya-ch

Description


Describe the bug

The health check reports the following CNIFailure for aws-ipvlan (v1.1.0).
However, with the CNI mechanism, the plugin keeps moving on to the next IP address until it finds one that is not reserved, so pod creation eventually succeeds (see the pod creation events below, which needed three retries).

    {
      "HostName": "<hostname>",
      "Connectivity": {
        "10.0.144.0/20": false
      },
      "Allocability": 0,
      "StatusCode": 602,
      "Status": "CNIFailure",
      "Message": "Failed to AssignIP: InvalidParameterValue: Address 10.0.144.1 is in subnet's reserved range.\n\tstatus code: 400, request id: xxx"
    }
# pod creation events (3 retries before success)

  Warning  FailedCreatePodSandBox  39s   kubelet  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_multi-nic-iperf3-client_default_6483a267-5dfd-4ca0-896d-707b8c8869ae_0(860bd3e40b8fcfcb1de4d2b47483362a6b237ba748bf89bb4222ae01d6633a19): error adding pod default_multi-nic-iperf3-client to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): [default/multi-nic-iperf3-client/6483a267-5dfd-4ca0-896d-707b8c8869ae:multinic-aws-ipvlan]: error adding container to network "multinic-aws-ipvlan": Failed to AssignIP: InvalidParameterValue: Address 10.0.144.1 is in subnet's reserved range.
           status code: 400, request id: 751fb907-59f7-4a17-9031-a888d3f822f0
  Normal   AddedInterface          39s  multus   Add eth0 [10.128.2.177/23] from ovn-kubernetes
  Warning  FailedCreatePodSandBox  37s  kubelet  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_multi-nic-iperf3-client_default_6483a267-5dfd-4ca0-896d-707b8c8869ae_0(2de014d5c95871357d29e29ab4b73baea2584506bbf02511ef797d0c56acef76): error adding pod default_multi-nic-iperf3-client to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): [default/multi-nic-iperf3-client/6483a267-5dfd-4ca0-896d-707b8c8869ae:multinic-aws-ipvlan]: error adding container to network "multinic-aws-ipvlan": Failed to AssignIP: InvalidParameterValue: Address 10.0.144.2 is in subnet's reserved range.
           status code: 400, request id: 3f34b056-6956-4b5a-9d32-cd7bdf939b20
  Normal   AddedInterface          24s  multus   Add eth0 [10.128.2.177/23] from ovn-kubernetes
  Warning  FailedCreatePodSandBox  22s  kubelet  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_multi-nic-iperf3-client_default_6483a267-5dfd-4ca0-896d-707b8c8869ae_0(132f9fea6589753b4c0d44882c5343e6fb35af4df7a3073da6b9303feaf84d40): error adding pod default_multi-nic-iperf3-client to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): [default/multi-nic-iperf3-client/6483a267-5dfd-4ca0-896d-707b8c8869ae:multinic-aws-ipvlan]: error adding container to network "multinic-aws-ipvlan": Failed to AssignIP: InvalidParameterValue: Address 10.0.144.3 is in subnet's reserved range.
           status code: 400, request id: b14f8fea-1807-4b2c-8121-c680aac9c409
  Normal   AddedInterface  8s  multus   Add eth0 [10.128.2.177/23] from ovn-kubernetes
  Normal   AddedInterface  7s  multus   Add net1 [10.0.144.4/20] from default/multinic-aws-ipvlan

We should fix aws-ipvlan itself to list the reserved IPs in advance, and/or allow the health check to retry CNIFailure up to some maximum number of attempts.
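
For the first option: AWS reserves the first four addresses and the last (broadcast) address of every VPC subnet, which is why 10.0.144.1 through 10.0.144.3 fail and 10.0.144.4 succeeds in the events above. Below is a minimal sketch of how the IPAM side could skip those candidates before requesting the address from EC2 (presumably via AssignPrivateIpAddresses); the awsReserved helper and the hard-coded CIDR are illustrative only, not the plugin's actual code.

    package main

    import (
    	"fmt"
    	"net"
    )

    // awsReserved reports whether ip is one of the addresses AWS reserves in
    // every VPC subnet: the network address, the next three (VPC router, DNS,
    // future use), and the final broadcast address.
    // NOTE: hypothetical helper for illustration; not part of aws-ipvlan.
    func awsReserved(ip net.IP, subnet *net.IPNet) bool {
    	base, cand := subnet.IP.To4(), ip.To4()
    	if base == nil || cand == nil || !subnet.Contains(ip) {
    		return false
    	}
    	toUint := func(b net.IP) uint32 {
    		return uint32(b[0])<<24 | uint32(b[1])<<16 | uint32(b[2])<<8 | uint32(b[3])
    	}
    	ones, bits := subnet.Mask.Size()
    	size := uint32(1) << uint(bits-ones)
    	offset := toUint(cand) - toUint(base)
    	return offset < 4 || offset == size-1
    }

    func main() {
    	_, subnet, _ := net.ParseCIDR("10.0.144.0/20")
    	for _, s := range []string{"10.0.144.1", "10.0.144.4", "10.0.159.255"} {
    		fmt.Printf("%-13s reserved: %v\n", s, awsReserved(net.ParseIP(s), subnet))
    	}
    }

Alternatively (or in addition), the health check could tolerate a bounded number of CNIFailure results and retry with the next candidate address before marking the subnet unallocatable, mirroring what the kubelet retries already achieve in the events above.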
