# SRv6 (Segment Routing on IPv6) Implementation of K8s Services
SRv6 provides an experimental way of implementing k8s services in IPv6 deployments of Contiv.
Since SRv6 is Segment Routing on the IPv6 data plane (see RFC 8402), you must enable IPv6 in Contiv to be able to use the SRv6 service renderer.
Additionally, you must enable SRv6Interconnect in the manifest.yaml together with noOverlay mode:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: contiv-agent-cfg
  namespace: kube-system
data:
  contiv.conf: |-
    useNoOverlay: true
    useSRv6Interconnect: true
    ...
```

The manifest.yaml file is generated by Helm, so you can alternatively change the Helm template input values in values.yaml (or values-arm64.yaml):

```yaml
...
contiv:
  useNoOverlay: true
  useSRv6Interconnect: true
...
```

and generate manifest.yaml with Helm.
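For reference, a minimal sketch of regenerating and applying the manifest with Helm is shown below; the chart path (k8s/contiv-vpp) and output file name are assumptions based on a typical Contiv-VPP checkout and may differ in your repository.

```bash
# Render the deployment manifest from the Helm chart after editing values.yaml.
# The chart path k8s/contiv-vpp is an assumption; adjust it to your checkout.
helm template k8s/contiv-vpp -f k8s/contiv-vpp/values.yaml > manifest.yaml

# Apply the generated manifest.
kubectl apply -f manifest.yaml
```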
The basic idea behind segment routing is to cut the packet's route into smaller routes, called segments. The SRv6 implementation of services uses these segments to route the packet to the service backends. The packet flow looks like this:
The user of the service is pod 1. A packet destined to the service IP address is steered into the SRv6 policy. The policy contains one path (segment list) per backend. Weighted load balancing takes place (all routes have equal weight) and one segment list is chosen:
- if pod 2 on node 1 is chosen: the route consists of only one segment, the segment whose segment id starts with 6666 (a segment id is an IPv6 address). IPv6 routing forwards the packet to the segment end (LocalSid-DX6), which decapsulates the packet (it was encapsulated in the policy, think of it as a tunnel) and cross-connects it to the interface to pod 2 (using IPv6 as the next hop).
- if the host backend on node 1 is chosen: basically the same, but IPv6 routing forwards the packet to a different place where the LocalSid is located.
- if pod 2 on node 2 is chosen: the route consists of 2 segments. The first segment transports the packet to the correct node (the segment end is a LocalSid-End), but this segment end does not decapsulate the packet; it routes it to the next segment end. The second segment end decapsulates the packet and routes it to the correct backend pod as in the previous case.
- if the host on node 2 is chosen: similar to the previous cases (see the sketch of the resulting segment lists below).
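To make the four cases easier to compare, here is an illustrative sketch of what the segment lists could look like. The addresses are purely hypothetical examples (not taken from a real deployment), assuming the node index is encoded in the SID similarly to the outputs shown later in this document:

```
# local pod on node 1:    < 6666:0:0:1::5 >                  one DX6 segment: decapsulate and cross-connect to the pod
# local host on node 1:   < 6655::1 >                        one DX6 segment towards the host stack
# remote pod on node 2:   < 7766:f00d::2, 6666:0:0:2::5 >    End segment on node 2, then DX6 segment to the pod
# remote host on node 2:  < 7766:f00d::2, 6655::2 >          End segment on node 2, then DX6 segment towards the host
```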
A special case of the SRv6 service is when the service is used from the host:
The load balancing is not done using SRv6, but in the k8s proxy. So we basically fall back to IPv6 routing towards the chosen backend. In the case of a local backend, IPv6 routing handles it without using any SRv6 components. In the case of a remote backend, SRv6 is used to transport the packet to the correct node, but from there plain IPv6 routing takes over and delivers it to the correct backend.
The path of a packet returning from the backend looks, in all SRv6 service cases, basically like the host special case with a remote backend: SRv6 handles only the node-to-node transport and the rest is done by plain IPv6 routing.
In case of problems, you can check the vswitch logs for the configuration of the steering, policy and localsids in the transactions:
```
- key: config/vpp/srv6/v2/localsid/6666:0:0:1::5
  val: { sid:"6666:0:0:1::5" installation_vrf_id:1 end_function_DX6:<outgoing_interface:"vpp-tap-d6087568be9f59aba028955c19f5684055a69d926ed89f720fe187a" next_hop:"2001:0:0:1::5" > }
- key: config/vpp/srv6/v2/policy/5555::d765
  val: { bsid:"5555::d765" srh_encapsulation:true segment_lists:<weight:1 segments:"6666:0:0:1::5" > }
- key: config/vpp/srv6/v2/steering/forK8sService-default-myservice
  val: { name:"forK8sService-default-myservice" policy_bsid:"5555::d765" l3_traffic:<installation_vrf_id:1 prefix_address:"2096::d765/128" > }
```

or look directly into VPP using the CLI and list the installed SRv6 components:
```
vpp# sh sr steering-policies
SR steering policies:
Traffic               SR policy BSID
L3 2096::a/128        5555::a
L3 2096::5ce9/128     5555::5ce9
L3 2096::1/128        5555::1
L3 2096::6457/128     5555::6457
L3 2096::d765/128     5555::d765

vpp# sh sr policies
SR policies:
[0].-   BSID: 5555::a
        Behavior: Encapsulation
        Type: Default
        FIB table: 0
        Segment Lists:
          [0].- < 6666:0:0:1::2 > weight: 1
          [1].- < 6666:0:0:1::3 > weight: 1
-----------
[1].-   BSID: 5555::5ce9
        Behavior: Encapsulation
        Type: Default
        FIB table: 0
        Segment Lists:
          [2].- < 6655::1 > weight: 1
-----------
...
-----------
[4].-   BSID: 5555::d765
        Behavior: Encapsulation
        Type: Default
        FIB table: 0
        Segment Lists:
          [5].- < 6666:0:0:1::5 > weight: 1
          [6].- < 6666:0:0:1::6 > weight: 1
-----------
...

vpp# sh sr localsids
SRv6 - My LocalSID Table:
=========================
        Address:        7766:f00d::1
        Behavior:       End
        Good traffic:   [0 packets : 0 bytes]
        Bad traffic:    [0 packets : 0 bytes]
--------------------
...
--------------------
        Address:        6666:0:0:1::5
        Behavior:       DX6 (Endpoint with decapsulation and IPv6 cross-connect)
        Iface:          tap4
        Next hop:       2001:0:0:1::5
        Good traffic:   [0 packets : 0 bytes]
        Bad traffic:    [0 packets : 0 bytes]
--------------------
```
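A hedged sketch of how to reach these logs and the VPP CLI from Kubernetes is shown below; the pod name is a placeholder and the exact vppctl invocation inside the vswitch container may differ between Contiv-VPP versions.

```bash
# Find the contiv-vswitch pod running on the node you are debugging (pod name below is a placeholder).
kubectl get pods -n kube-system -o wide | grep contiv-vswitch

# Inspect its logs for the SRv6-related transactions shown above.
kubectl logs -n kube-system contiv-vswitch-xxxxx | grep srv6

# Open the VPP CLI inside the vswitch container (assuming vppctl is available there).
kubectl exec -it -n kube-system contiv-vswitch-xxxxx -- vppctl
```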