- From this point on it is assumed that all commands are executed within the unpacked release folder (see the sketch below).
- This is important because Docker-Compose must be able to find the `.env` file there.
- If, as recommended, you run the solution on different machines, it is also assumed that you download and unpack the release package on each machine and then provide the `.env` file.
- Furthermore, it is recommended to store the `.env` file as the central configuration file in a version management system.
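
For illustration, a minimal sketch of preparing one machine; the archive name, folder name and the location of your central `.env` file are placeholders:

```
# Unpack the downloaded release package (file name is a placeholder)
tar xvzf axway-apim-elk-release.tar.gz
cd axway-apim-elk-release

# Provide the central .env file, e.g. from your version management system
cp /path/to/config-repo/.env .

# All further docker-compose commands are executed from this folder,
# so that Docker-Compose automatically picks up the .env file
```
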
Even if otherwise possible, it is recommended to deploy the individual components in the following order. For each component you can then check if it is functional.
### Elasticsearch
Open the `.env` file and configure ELASTICSEARCH_HOSTS. At this point, please configure only one Elasticsearch node. You can start with a single node and add more nodes later. More on this topic can be found under [Multi-Node Deployment](#setup-elasticsearch-multi-node) later in the documentation.
This URL is used by all Elasticsearch clients (Logstash, API-Builder, Filebeat) of the solution to establish communication.
If you use an external Elasticsearch cluster, please specify the node(s) that are given to you.
Please keep in mind that the hostnames must be resolvable within the Docker containers. You can also assign the cluster name here if the default `axway-apim-elasticsearch` is not appropriate. Example:
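
A minimal sketch of such an `.env` entry, using the host from the check below; the variable holding the cluster name is an assumption, so check the `.env` template shipped with the release:

```
ELASTICSEARCH_HOSTS=https://my-elasticsearch-host.com:9200
# Optional: override the default cluster name (variable name assumed)
ELASTICSEARCH_CLUSTERNAME=my-apim-elasticsearch
```
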
Once Elasticsearch has been started, you can check that the node responds:

```
GET https://my-elasticsearch-host.com:9200
```
At this point you can already add the cluster UUID to the `.env` file (`ELASTICSEARCH_CLUSTER_UUID`). With that, the Single-Node Elasticsearch Cluster is up & running.
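
The cluster UUID is part of the response to the request above, for example (values are illustrative):

```
{
  "name" : "es01",
  "cluster_name" : "axway-apim-elasticsearch",
  "cluster_uuid" : "R7xmEN1qQpComuAfb3Nxwg",
  "version" : { ... },
  "tagline" : "You Know, for Search"
}
```

You would then add a line such as `ELASTICSEARCH_CLUSTER_UUID=R7xmEN1qQpComuAfb3Nxwg` to the `.env` file.
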
### Kibana
For Kibana, all parameters are already stored in the `.env` file. Start Kibana with the following command:
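
A minimal sketch of such a start command, assuming Kibana is defined as a Compose service named `kibana` in the release's Docker-Compose setup (the service name is an assumption):

```
# Run from the unpacked release folder so the .env file is picked up;
# the service name "kibana" is an assumption, check the Compose files of the release
docker-compose up -d kibana
```
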
At this point you can already import the sample dashboard: `kibana/Axway-APIM-Da…`

:exclamation: You are welcome to create additional visualizations and dashboards, but do not adapt the existing ones, as they will be overwritten with the next update.
### Logstash / API-Builder / Memcached
It is recommended to deploy these components on one machine, so that they are in a common Docker-Compose file and share the same network. Furthermore, low latency between these components is beneficial. This allows you to use the default values for Memcached and API-Builder. Therefore, for this step you only need to specify where the Admin-Node-Manager or the API-Manager can be found. If necessary, you also have to specify an API-Manager admin user.
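
A minimal sketch of how these settings could look in the `.env` file; the variable names are assumptions based on the description above, so check the `.env` template shipped with the release:

```
# Where the Admin-Node-Manager or the API-Manager can be reached (variable names assumed)
ADMIN_NODE_MANAGER=https://my-admin-node-manager:8090
API_MANAGER=https://my-api-manager:8075

# API-Manager admin user, if required (variable names assumed)
API_MANAGER_USERNAME=apiadmin
API_MANAGER_PASSWORD=changeme
```
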