# CNC troubleshooting
This document explains how to troubleshoot issues that you might encounter while using the Citrix Kubernetes node controller (CNC). It describes how to collect logs to determine the root cause and how to apply workarounds for some of the common issues related to the CNC configuration.
To validate Citrix ADC and the basic node configurations, refer to the image on the [deployment](README.md) page.
### Service status DOWN
To debug the issue when a service is in the DOWN state:
1. Verify the logs of the CNC pod using the following command:
```
kubectl logs <cnc-pod> -n <namespace>
```
Check for any "permission" errors in the logs. CNC creates kube-cnc-router pods, which require the NET_ADMIN privilege to perform the configurations on the nodes. So, the CNC service account must have the NET_ADMIN privilege and the ability to create host mode kube-cnc-router pods.
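As a quick way to scan the logs for such errors, you can pipe them through a case-insensitive filter. The sketch below runs the filter on an invented sample log file (`/tmp/cnc-sample.log` and its log lines are hypothetical, for illustration only; a real CNC pod's log format may differ):

```shell
# Hypothetical sample of CNC pod log lines, created only to demonstrate the filter.
cat > /tmp/cnc-sample.log <<'EOF'
I0712 10:01:22 cnc controller started
E0712 10:01:23 pods is forbidden: User "system:serviceaccount:kube-system:cnc" cannot create resource "pods"
E0712 10:01:24 operation not permitted: NET_ADMIN capability missing
EOF

# On a live cluster, pipe the real logs into the same filter instead:
#   kubectl logs <cnc-pod> -n <namespace> | grep -i -E 'permission|forbidden|not permitted'
grep -i -E 'permission|forbidden|not permitted' /tmp/cnc-sample.log
```

Any matching lines point to missing RBAC permissions or a missing NET_ADMIN capability for the CNC service account.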
2. Verify the logs of the kube-cnc-router pod using the following command:
```
kubectl logs <kube-cnc-pod> -n <namespace>
```
Check for any errors in the node configuration. The following is a typical router pod log:
### Service status UP but ping from Citrix ADC not working
If you are not able to ping the service IP address from Citrix ADC even though the services are in an operational state, one reason may be the presence of a PBR entry that directs the packets from Citrix ADC with the source IP as NSIP to the default gateway.
This does not impact any functionality. You can ping with the VTEP of Citrix ADC as the source IP address by using the -S option of the ping command in the Citrix ADC command line interface. For example:
```
ping <serviceIP> -S <vtepIP>
```
Note: If it is necessary to ping with the NSIP itself, you must remove the PBR entry or add a new PBR entry for the endpoint with a higher priority.
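Inspecting and removing the PBR entry can be sketched in the Citrix ADC CLI as follows (the PBR name is a placeholder; verify the exact syntax against the CLI reference for your ADC version):

```
show ns pbrs
rm ns pbr <pbr-name>
```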
### cURL to the pod endpoint or VIP is not working
Even though services are in the UP state, you might not be able to cURL to the pod endpoint; that is, the stateful TCP session to the endpoint fails. One possible reason is that the ns mode 'MBF' is enabled. This issue depends on the deployment and might occur only on certain versions of Citrix ADC.
To resolve this issue, disable the MBF ns mode or bind a net profile with MBF disabled to the service group.
Note: If disabling MBF resolves the issue, it should be kept disabled.
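The two workarounds can be sketched in the Citrix ADC CLI as follows (the profile and service group names are placeholders, and the exact parameters should be verified against the CLI reference for your ADC version):

```
disable ns mode mbf

add netProfile <profile-name> -MBF DISABLED
set serviceGroup <servicegroup-name> -netProfile <profile-name>
```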
## Customer support
For general support, when you raise issues, provide the following details for faster debugging.
Run cURL or ping from Citrix ADC to the endpoint and capture the details for the following:
For the node, provide the output of the following commands:
1. tcpdump capture on the CNC interface on the nodes
5. output of "iptables -L" on the node.
For Citrix ADC, provide the output of the following show commands: