> **Note:** This repository was archived by the owner on Dec 14, 2022. It is now read-only.
# API-Management Traffic-Monitor based on the ELK stack
When having many API-Gateway instances with millions of requests, the API-Gateway Traffic Monitor can become slow and the observation period quite short. The purpose of this project is to solve that performance issue, make it possible to observe a longer time frame, and get other benefits by using a standard external datastore: [Elasticsearch](https://www.elastic.co/elasticsearch).
The overall architecture this project provides looks like this:
![Architecture][img1]
This also makes it possible to collect data from API-Gateways running all over the world into a centralized Elasticsearch instance, to have it available with the best possible performance, independent of the network performance.
It also helps when running the Axway API-Gateway in a Docker-orchestration environment, where containers are started and stopped, as it avoids losing data when an API-Gateway container is stopped.
### How it works
Each API-Gateway instance writes, [if configured](#enable-open-traffic-event-log), Open-Traffic Event-Log files, which are streamed by [Filebeat](https://www.elastic.co/beats/filebeat) into a Logstash instance. [Logstash](https://www.elastic.co/logstash) performs data pre-processing, combines different events, and finally forwards these so-called documents into an Elasticsearch cluster.
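As an illustration of this flow, a minimal Logstash pipeline could look like the sketch below. This is not the pipeline shipped with the project (which performs the actual pre-processing and event correlation); the port, the host name and the omission of all filter logic are assumptions for illustration only:

```
input {
  beats {
    # Filebeat ships the Open-Traffic events to this port
    port => 5044
  }
}
output {
  elasticsearch {
    # Host name assumed; matches a single-node docker-compose setup
    hosts => ["elasticsearch1:9200"]
  }
}
```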
Once the data is indexed by Elasticsearch, it can be used by different clients. This process allows near-realtime monitoring of incoming requests; it takes around 5 seconds until a request is available in Elasticsearch.
The Log-Inspector is a new, separate user interface with a very basic set of functionalities. As part of the project, the Log-Inspector is activated by default when using `docker-compose up -d`. If you don't want to use it, it can be disabled by commenting out the following lines in the docker-compose.yml file:
```yaml
nginx:
image: nginx:1.17.6
```
The Log-Inspector is accessible on the following URL: `http://hostname-to-your-docker-machine:8888/logspector.html`
![Log-Inspector][img5]
## Prerequisites
For a simple deployment the prerequisites are minimal, as all services can be started as Docker containers. In order to start all components in PoC-like mode you just need:
1. A Docker engine
2. Docker-compose
3. An API-Management Version >7.7-20200130
   - Version 7.7-20200130 is required due to some date-format changes in the Open-Traffic format. With older versions of the API-Gateway you will get errors in Logstash processing.
Using the provided docker-compose is good to play with; however, this approach is not recommended for production environments. Depending on the load, a dedicated machine (node) for Elasticsearch is recommended. The default configuration is prepared to scale up to five Elasticsearch nodes, which can handle millions of requests. To run Logstash and the API-Builder service, a Docker-orchestration framework is recommended, as you get monitoring, self-healing, elasticity and more.
## Installation / Configuration
To run the components in PoC-like mode, the recommended way is to clone this project onto a machine having docker and docker-compose installed. This machine must also have file-based access to the running API-Gateway instance, as the Filebeat docker container will mount the open-traffic folder into the container.
This creates a local copy of the repository and you can start from there.
### Enable Open-Traffic Event Log
Obviously you have to enable the Open-Traffic Event-Log for your API-Gateway instances. [Read here][1] how to enable it.
After this configuration has been done, Open-Traffic log-files will be created by default in this location: `apigateway/logs/opentraffic`. This location becomes relevant when configuring Filebeat.
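To illustrate how this folder comes into play, a Filebeat input for it might look like the sketch below. The project ships its own filebeat.yml; the absolute path and the Logstash host here are assumptions that depend on your installation:

```yaml
filebeat.inputs:
  - type: log
    paths:
      # Path assumed; point this at your gateway's opentraffic folder
      - /opt/Axway/apigateway/logs/opentraffic/*.log
output.logstash:
  # Host assumed; matches the Logstash service name in docker-compose
  hosts: ["logstash:5044"]
```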
### Configure the Admin-Node-Manager
This step is required if you would like to use the existing API-Gateway Manager Traffic-Monitor in combination with Elasticsearch.
The Admin-Node-Manager (listening by default on port 8090) is responsible for serving the Traffic-Monitor and needs to be configured to use the API-Builder REST-API instead.
For the following steps, please open the Admin-Node-Manager configuration in Policy-Studio. You can read [here](https://docs.axway.com/bundle/axway-open-docs/page/docs/apim_administration/apigtw_admin/general_rbac_ad_ldap/index.html#use-the-ldap-policy-to-protect-management-services) how to do that.
- Create a new policy called: `Use Elasticsearch API`
- Configure this policy like so:
![use ES API][img3]
Of course, the components can also run on different machines or on a Docker-Orchestration platform.

```
docker-compose down
```

## Troubleshooting
#### Check processes/containers are running
From within the folder where the docker-compose.yml file is located, check that all containers are up and running (for instance with `docker-compose ps`). Make sure the Filebeat harvester has been started on the Open-Traffic files:
```
INFO log/harvester.go:251 Harvester started for file: /var/log/work/group-2_instance-1_traffic.log
```
The following error means that Logstash is not running or not reachable:
```
ERROR pipeline/output.go:100 Failed to connect to backoff(async(tcp://logstash:5044)): lookup logstash on 127.0.0.11:53: no such host
```
General note: Filebeat does not tell you when it is successfully processing your log-files. When the harvester process has started and you don't see any errors, you can assume your files are being processed.
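Since Filebeat stays silent on success, a quick check is to grep its logs for the harvester line. The sketch below demonstrates the filter on a sample log line; in a live setup you would pipe the Filebeat container logs into the same grep (the container name depends on your docker-compose setup and is an assumption):

```shell
# Sample Filebeat log line; live, replace the echo with:
#   docker logs filebeat 2>&1
sample='INFO log/harvester.go:251 Harvester started for file: /var/log/work/group-2_instance-1_traffic.log'

# Succeeds only if a harvester was started on a traffic log file
if echo "$sample" | grep -q 'Harvester started for file: .*traffic\.log'; then
  echo "harvester running"
fi
```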
#### Check Logstash processing
Logstash writes to stdout; hence you can view its processing information via the container logs. Once Logstash has started successfully, you will see:

```
[INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
```
Once Logstash is successfully processing data, you will see the documents flying by in the log output.
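To spot problems such as the date-format errors mentioned under the prerequisites, you can filter that output for warnings and errors. The sketch uses two inlined sample lines (illustrative, not verbatim Logstash output); live, you would pipe the Logstash container logs into the same grep (container name assumed):

```shell
# Count WARN/ERROR lines; live, replace the printf with:
#   docker logs logstash 2>&1
printf '%s\n' \
  '[INFO ][logstash.agent] Successfully started Logstash API endpoint {:port=>9600}' \
  '[ERROR][logstash.filters.date] cannot parse date' \
| grep -c -E 'ERROR|WARN'
```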
#### Check Elasticsearch processing
It takes a while until Elasticsearch is fully started. You can follow the progress with:

```
docker logs elasticsearch1 --follow
```

When Elasticsearch is finally started, it reports the following line:

```
{ "level": "INFO", "component": "o.e.c.r.a.AllocationService", "cluster.name": "elasticsearch", "node.name": "elasticsearch1", "message": "Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.kibana_1][0]]]).", "cluster.uuid": "k22kMiq4R12I7BSTD87n5Q", "node.id": "6TVkdA-YR7epgV39dZNG2g" }
```
Status Yellow is expected when running Elasticsearch on a single node, as it cannot assign the desired replicas. You may use the Kibana Dev Tools or curl to get additional information.
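If you prefer a script-friendly check over reading logs, the cluster health endpoint returns the same status as a small JSON document. The sketch extracts the status field from an inlined sample response; live, you would feed it from `curl -s http://localhost:9200/_cluster/health` (host and port are assumptions for a local single-node setup):

```shell
# Sample health response; live, replace with:
#   health=$(curl -s http://localhost:9200/_cluster/health)
health='{"cluster_name":"elasticsearch","status":"yellow","number_of_nodes":1}'

# Extract the value of the "status" field (green/yellow/red)
echo "$health" | sed -n 's/.*"status":"\([a-z]*\)".*/\1/p'
```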