feat(apisix): add Cloudron package

- Implements Apache APISIX packaging for Cloudron platform.
- Includes Dockerfile, CloudronManifest.json, and start.sh.
- Configured to use Cloudron's etcd addon.

🤖 Generated with Gemini CLI
Co-Authored-By: Gemini <noreply@google.com>
2025-09-04 09:42:47 -05:00
parent f7bae09f22
commit 54cc5f7308
1608 changed files with 388342 additions and 0 deletions


@@ -0,0 +1,775 @@
---
title: FAQ
keywords:
- Apache APISIX
- API Gateway
- FAQ
description: This article lists solutions to common problems when using Apache APISIX.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Why do I need a new API gateway?
As organizations move towards cloud native microservices, there is a need for an API gateway that is performant, flexible, secure, and scalable.
APISIX outperforms other API gateways on these metrics while remaining platform agnostic and fully dynamic, delivering features such as multi-protocol support, fine-grained routing, and multi-language plugin support.
## How does Apache APISIX differ from other API gateways?
Apache APISIX differs in the following ways:
- It uses etcd to save and synchronize configurations rather than relational databases like PostgreSQL or MySQL. The real-time event notification system in etcd is easier to scale than in these alternatives. This allows APISIX to synchronize the configuration in real-time, makes the code concise and avoids a single point of failure.
- Fully dynamic.
- Supports [hot loading of Plugins](./terminology/plugin.md#hot-reload).
## What is the performance impact of using Apache APISIX?
Apache APISIX delivers best-in-class performance among API gateways, with a single-core QPS of 18,000 and an average latency of 0.2 ms.
Specific results of the performance benchmarks can be found [here](benchmark.md).
## Which platforms does Apache APISIX support?
Apache APISIX is platform agnostic and avoids vendor lock-in. It is built for cloud native environments and can run anywhere from bare-metal machines to Kubernetes. It even supports Apple Silicon chips.
## What does it mean by "Apache APISIX is fully dynamic"?
Apache APISIX is fully dynamic in the sense that it doesn't require restarts to change its behavior.
It does the following dynamically:
- Reloading Plugins
- Proxy rewrites
- Proxy mirror
- Response rewrites
- Health checks
- Traffic split
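For example, the plugins on an existing Route can be updated at runtime with a single Admin API call, and the change takes effect without restarting APISIX. The following is a minimal sketch, assuming Route `1` already exists and `$admin_key` holds your Admin API key (see the note on fetching it later in this FAQ):

```shell
# Hot-update the plugins on an existing Route without reloading APISIX.
# Route 1 and the limit-count settings are placeholders for illustration.
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PATCH -d '
{
    "plugins": {
        "limit-count": {
            "count": 100,
            "time_window": 60,
            "rejected_code": 503,
            "key": "remote_addr"
        }
    }
}'
```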
## Does Apache APISIX have a user interface?
Yes. Apache APISIX has an experimental feature called [Apache APISIX Dashboard](https://github.com/apache/apisix-dashboard), which is independent from Apache APISIX. To work with Apache APISIX through a user interface, you can deploy the Apache APISIX Dashboard.
## Can I write my own Plugins for Apache APISIX?
Yes. Apache APISIX is flexible and extensible through the use of custom Plugins that can be specific to user needs.
You can write your own Plugins by referring to [How to write your own Plugins](plugin-develop.md).
## Why does Apache APISIX use etcd for the configuration center?
In addition to the basic functionality of storing the configurations, Apache APISIX also needs a storage system that supports these features:
1. Distributed deployments in clusters.
2. Guarded transactions by comparisons.
3. Multi-version concurrency control.
4. Notifications and watch streams.
5. High performance with minimum read/write latency.
etcd provides these features and more, making it a better fit than databases like PostgreSQL and MySQL.
To learn more about how etcd compares with other alternatives, see this [comparison chart](https://etcd.io/docs/latest/learning/why/#comparison-chart).
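As a small illustration of the watch mechanism APISIX relies on (a sketch, assuming a local etcd and the default `/apisix` key prefix), you can observe configuration changes pushed by etcd with `etcdctl`:

```shell
# Watch all keys under the APISIX prefix; any Admin API change shows up here immediately.
etcdctl watch --prefix /apisix
```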
## When installing Apache APISIX dependencies with LuaRocks, why does it cause a timeout or result in a slow or unsuccessful installation?
This is likely because the LuaRocks server used is blocked.
To solve this, you can set `https_proxy` or use the `--server` flag to specify a faster LuaRocks server.
You can run the command below to see the available servers (needs LuaRocks 3.0+):
```shell
luarocks config rocks_servers
```
Mainland China users can use `luarocks.cn` as the LuaRocks server. You can use this wrapper with the Makefile to set this up:
```bash
make deps ENV_LUAROCKS_SERVER=https://luarocks.cn
```
If this does not solve your problem, you can try getting a detailed log by using the `--verbose` or `-v` flag to diagnose the problem.
## How do I build the APISIX-Runtime environment?
Some features require additional NGINX modules, which in turn require APISIX to run on APISIX-Runtime. If you need these features, you can refer to the code in [api7/apisix-build-tools](https://github.com/api7/apisix-build-tools) to build your own APISIX-Runtime environment.
## How can I make a gray release with Apache APISIX?
Let's take an example query `foo.com/product/index.html?id=204&page=2` and consider that you need to make a gray release based on the `id` in the query string with this condition:
1. Group A: `id <= 1000`
2. Group B: `id > 1000`
There are two different ways to achieve this in Apache APISIX:
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
1. Using the `vars` field in a [Route](terminology/route.md):
```shell
curl -i http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/index.html",
"vars": [
["arg_id", "<=", "1000"]
],
"plugins": {
"redirect": {
"uri": "/test?group_id=1"
}
}
}'
curl -i http://127.0.0.1:9180/apisix/admin/routes/2 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/index.html",
"vars": [
["arg_id", ">", "1000"]
],
"plugins": {
"redirect": {
"uri": "/test?group_id=2"
}
}
}'
```
All the available operators of the current `lua-resty-radixtree` are listed [here](https://github.com/api7/lua-resty-radixtree#operator-list).
2. Using the [traffic-split](plugins/traffic-split.md) Plugin.
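For reference, a hedged sketch of the same gray release using the traffic-split Plugin is shown below. Instead of redirecting, this version routes matched requests to a separate upstream; the upstream addresses `127.0.0.1:1980` (Group A) and `127.0.0.1:1981` (Group B) are placeholders:

```shell
curl -i http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
    "uri": "/index.html",
    "plugins": {
        "traffic-split": {
            "rules": [
                {
                    "match": [
                        {
                            "vars": [["arg_id", ">", "1000"]]
                        }
                    ],
                    "weighted_upstreams": [
                        {
                            "upstream": {
                                "type": "roundrobin",
                                "nodes": {
                                    "127.0.0.1:1981": 1
                                }
                            },
                            "weight": 1
                        }
                    ]
                }
            ]
        }
    },
    "upstream": {
        "type": "roundrobin",
        "nodes": {
            "127.0.0.1:1980": 1
        }
    }
}'
```

Requests whose `id` is greater than 1000 are forwarded to the matched upstream (Group B), while the rest fall through to the Route's default upstream (Group A).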
## How do I redirect HTTP traffic to HTTPS with Apache APISIX?
For example, you need to redirect traffic from `http://foo.com` to `https://foo.com`.
Apache APISIX provides several different ways to achieve this:
1. Setting `http_to_https` to `true` in the [redirect](plugins/redirect.md) Plugin:
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/hello",
"host": "foo.com",
"plugins": {
"redirect": {
"http_to_https": true
}
}
}'
```
2. Advanced routing with `vars` in the redirect Plugin:
```shell
curl -i http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/hello",
"host": "foo.com",
"vars": [
[
"scheme",
"==",
"http"
]
],
"plugins": {
"redirect": {
"uri": "https://$host$request_uri",
"ret_code": 301
}
}
}'
```
3. Using the `serverless` Plugin:
```shell
curl -i http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/hello",
"plugins": {
"serverless-pre-function": {
"phase": "rewrite",
"functions": ["return function() if ngx.var.scheme == \"http\" and ngx.var.host == \"foo.com\" then ngx.header[\"Location\"] = \"https://foo.com\" .. ngx.var.request_uri; ngx.exit(ngx.HTTP_MOVED_PERMANENTLY); end; end"]
}
}
}'
```
To test this serverless Plugin:
```shell
curl -i -H 'Host: foo.com' http://127.0.0.1:9080/hello
```
The response should be:
```
HTTP/1.1 301 Moved Permanently
Date: Mon, 18 May 2020 02:56:04 GMT
Content-Type: text/html
Content-Length: 166
Connection: keep-alive
Location: https://foo.com/hello
Server: APISIX web server
<html>
<head><title>301 Moved Permanently</title></head>
<body>
<center><h1>301 Moved Permanently</h1></center>
<hr><center>openresty</center>
</body>
</html>
```
## How do I change Apache APISIX's log level?
By default, the log level of Apache APISIX is set to `warn`. You can set this to `info` to trace the messages printed by `core.log.info`.
For this, you can set the `error_log_level` parameter in your configuration file (conf/config.yaml) as shown below and reload Apache APISIX.
```yaml
nginx_config:
  error_log_level: "info"
```
## How do I reload my custom Plugins for Apache APISIX?
All Plugins in Apache APISIX can be hot reloaded.
You can learn more about hot reloading of Plugins [here](./terminology/plugin.md#hot-reload).
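For completeness, the hot reload itself can be triggered through the Admin API (assuming `$admin_key` holds your Admin API key):

```shell
curl http://127.0.0.1:9180/apisix/admin/plugins/reload -H "X-API-KEY: $admin_key" -X PUT
```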
## How do I configure Apache APISIX to listen on multiple ports when handling HTTP or HTTPS requests?
By default, Apache APISIX listens only on port 9080 when handling HTTP requests.
To configure Apache APISIX to listen on multiple ports, you can:
1. Modify the parameter `node_listen` in `conf/config.yaml`:
```yaml
apisix:
  node_listen:
    - 9080
    - 9081
    - 9082
```
Similarly for HTTPS requests, modify the parameter `ssl.listen` in `conf/config.yaml`:
```yaml
apisix:
  ssl:
    enable: true
    listen:
      - port: 9443
      - port: 9444
      - port: 9445
```
2. Reload or restart Apache APISIX.
## After uploading the SSL certificate, why can't the corresponding route be accessed through HTTPS + IP?
If you access the server directly over HTTPS + IP address, the server compares the IP address against the bound SNI. Since the SSL certificate is bound to a domain name, no matching certificate is found for that SNI, certificate verification fails, and the user cannot access the gateway via HTTPS + IP.
You can support this scenario by setting the `fallback_sni` parameter in the configuration file to a domain name. When the user accesses the gateway via HTTPS + IP and the SNI is empty, APISIX falls back to the default SNI, allowing HTTPS + IP access to the gateway.
```yaml title="./conf/config.yaml"
apisix:
  ssl:
    fallback_sni: "${your sni}"
```
## How does Apache APISIX achieve millisecond-level configuration synchronization?
Apache APISIX uses etcd for its configuration center. etcd provides subscription functions like [watch](https://github.com/api7/lua-resty-etcd/blob/master/api_v3.md#watch) and [watchdir](https://github.com/api7/lua-resty-etcd/blob/master/api_v3.md#watchdir) that can monitor changes to specific keywords or directories.
In Apache APISIX, we use [etcd.watchdir](https://github.com/api7/lua-resty-etcd/blob/master/api_v3.md#watchdir) to monitor changes in a directory.
If there is no change in the directory being monitored, the process blocks until it times out or runs into an error.
If there are changes in the directory being monitored, etcd returns the new data within milliseconds, and Apache APISIX updates its in-memory cache.
## How do I customize the Apache APISIX instance id?
By default, Apache APISIX reads the instance id from `conf/apisix.uid`. If this is not found and no id is configured, Apache APISIX will generate a `uuid` for the instance id.
To specify a meaningful id to bind Apache APISIX to your internal system, set the `id` in your `conf/config.yaml` file:
```yaml
apisix:
  id: "your-id"
```
## Why are there errors saying "failed to fetch data from etcd, failed to read etcd dir, etcd key: xxxxxx" in the error.log?
Please follow the troubleshooting steps described below:
1. Make sure that there aren't any networking issues between Apache APISIX and your etcd deployment in your cluster.
2. If your network is healthy, check whether you have enabled the [gRPC gateway](https://etcd.io/docs/v3.4/dev-guide/api_grpc_gateway/) for etcd. The default state depends on whether you used command line options or a configuration file to start the etcd server.
- If you used command line options, gRPC gateway is enabled by default. You can enable it manually as shown below:
```sh
etcd --enable-grpc-gateway --data-dir=/path/to/data
```
**Note**: This flag is not shown while running `etcd --help`.
- If you used a configuration file, gRPC gateway is disabled by default. You can manually enable it as shown below:
In `etcd.json`:
```json
{
"enable-grpc-gateway": true,
"data-dir": "/path/to/data"
}
```
In `etcd.conf.yml`:
```yml
enable-grpc-gateway: true
```
**Note**: This distinction was eliminated by etcd in their latest master branch but wasn't backported to previous versions.
## How do I setup high availability Apache APISIX clusters?
Apache APISIX can be made highly available by adding a load balancer in front of it, as APISIX's data plane is stateless and can be scaled as needed.
The control plane of Apache APISIX is highly available as it relies only on an etcd cluster.
## Why does the `make deps` command fail when installing Apache APISIX from source?
When executing `make deps` to install Apache APISIX from source, you can get an error as shown below:
```shell
$ make deps
......
Error: Failed installing dependency: https://luarocks.org/luasec-0.9-1.src.rock - Could not find header file for OPENSSL
No file openssl/ssl.h in /usr/local/include
You may have to install OPENSSL in your system and/or pass OPENSSL_DIR or OPENSSL_INCDIR to the luarocks command.
Example: luarocks install luasec OPENSSL_DIR=/usr/local
make: *** [deps] Error 1
```
This is caused by the missing OpenResty openssl development kit. To install it, refer to [installing dependencies](install-dependencies.md).
## How do I access the APISIX Dashboard through Apache APISIX proxy?
You can follow the steps below to configure this:
1. Configure different ports for the Apache APISIX proxy and the Admin API, or disable the Admin API.
```yaml
deployment:
  admin:
    admin_listen: # use a separate port
      ip: 127.0.0.1
      port: 9180
```
2. Add a proxy Route for the Apache APISIX dashboard:
```shell
curl -i http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uris":[ "/*" ],
"name":"apisix_proxy_dashboard",
"upstream":{
"nodes":[
{
"host":"127.0.0.1",
"port":9000,
"weight":1
}
],
"type":"roundrobin"
}
}'
```
**Note**: The Apache APISIX Dashboard is listening on `127.0.0.1:9000`.
## How do I use regular expressions (regex) for matching `uri` in a Route?
You can use the `vars` field in a Route for matching regular expressions:
```shell
curl -i http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/*",
"vars": [
["uri", "~~", "^/[a-z]+$"]
],
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
}
}'
```
And to test this request:
```shell
# uri matched
$ curl http://127.0.0.1:9080/hello -i
HTTP/1.1 200 OK
...
# uri didn't match
$ curl http://127.0.0.1:9080/12ab -i
HTTP/1.1 404 Not Found
...
```
For more info on using `vars` refer to [lua-resty-expr](https://github.com/api7/lua-resty-expr).
## Does the Upstream node support configuring a [FQDN](https://en.wikipedia.org/wiki/Fully_qualified_domain_name) address?
Yes. The example below shows configuring the FQDN `httpbin.default.svc.cluster.local` (a Kubernetes service):
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/ip",
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.default.svc.cluster.local": 1
}
}
}'
```
To test this Route:
```shell
$ curl http://127.0.0.1:9080/ip -i
HTTP/1.1 200 OK
...
```
## What is the `X-API-KEY` of the Admin API? Can it be modified?
`X-API-KEY` of the Admin API refers to the `apisix.admin_key.key` in your `conf/config.yaml` file. It is the access token for the Admin API.
By default, it is set to `edd1c9f034335f136f87ad84b625c8f1` and can be modified by changing the parameter in your `conf/config.yaml` file:
```yaml
apisix:
  admin_key:
    - name: "admin"
      key: newkey
      role: admin
```
Now, to access the Admin API:
```shell
$ curl -i http://127.0.0.1:9180/apisix/admin/routes/1 -H 'X-API-KEY: newkey' -X PUT -d '
{
"uris":[ "/*" ],
"name":"admin-token-test",
"upstream":{
"nodes":[
{
"host":"127.0.0.1",
"port":1980,
"weight":1
}
],
"type":"roundrobin"
}
}'
HTTP/1.1 200 OK
......
```
**Note**: Using the default token exposes you to security risks. Make sure to update it before deploying to a production environment.
## How do I allow all IPs to access Apache APISIX's Admin API?
By default, Apache APISIX only allows IPs in the range `127.0.0.0/24` to access the Admin API.
To allow IPs in all ranges, you can update your configuration file as shown below and restart or reload Apache APISIX.
```yaml
deployment:
  admin:
    allow_admin:
      - 0.0.0.0/0
```
**Note**: This should only be used in non-production environments to allow all clients to access Apache APISIX and is not safe for production environments. Always authorize specific IP addresses or address ranges for production environments.
## How do I auto renew SSL certificates with acme.sh?
You can run the commands below to achieve this:
```bash
curl --output /root/.acme.sh/renew-hook-update-apisix.sh --silent https://gist.githubusercontent.com/anjia0532/9ebf8011322f43e3f5037bc2af3aeaa6/raw/65b359a4eed0ae990f9188c2afa22bacd8471652/renew-hook-update-apisix.sh
```
```bash
chmod +x /root/.acme.sh/renew-hook-update-apisix.sh
```
```bash
acme.sh --issue --staging -d demo.domain --renew-hook "/root/.acme.sh/renew-hook-update-apisix.sh -h http://apisix-admin:port -p /root/.acme.sh/demo.domain/demo.domain.cer -k /root/.acme.sh/demo.domain/demo.domain.key -a xxxxxxxxxxxxx"
```
```bash
acme.sh --renew --domain demo.domain
```
You can check [this post](https://juejin.cn/post/6965778290619449351) for a more detailed instruction on setting this up.
## How do I strip a prefix from a path before forwarding to Upstream in Apache APISIX?
To strip a prefix from a path in your Route, for example rewriting `/foo/get` to `/get`, you can use the [proxy-rewrite](plugins/proxy-rewrite.md) Plugin:
```shell
curl -i http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/foo/*",
"plugins": {
"proxy-rewrite": {
"regex_uri": ["^/foo/(.*)","/$1"]
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
And to test this configuration:
```shell
curl http://127.0.0.1:9080/foo/get -i
HTTP/1.1 200 OK
...
{
...
"url": "http://127.0.0.1/get"
}
```
## How do I fix the error `unable to get local issuer certificate` in Apache APISIX?
You can manually set the path to your certificate by adding it to your `conf/config.yaml` file as shown below:
```yaml
apisix:
  ssl:
    ssl_trusted_certificate: /path/to/certs/ca-certificates.crt
```
**Note**: When you are trying to connect to TLS services with cosocket and APISIX does not trust the peer's TLS certificate, you should set the parameter `apisix.ssl.ssl_trusted_certificate`.
For example, if you are using Nacos for service discovery in APISIX, and Nacos has TLS enabled (configured host starts with `https://`), you should set `apisix.ssl.ssl_trusted_certificate` and use the same CA certificate as Nacos.
## How do I fix the error `module 'resty.worker.events' not found` in Apache APISIX?
This error is caused by installing Apache APISIX in the `/root` directory. The worker process would be run by the user "nobody", which does not have enough permissions to access files in the `/root` directory.
To fix this, you can change the APISIX installation directory to the recommended directory: `/usr/local`.
## What is the difference between `plugin-metadata` and `plugin-config` in Apache APISIX?
The differences between the two are described in the table below:
| `plugin-metadata` | `plugin-config` |
| ---------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------- |
| Metadata of a Plugin shared by all configuration instances of the Plugin. | Collection of configuration instances of multiple different Plugins. |
| Used when there are property changes that need to be propagated across all configuration instances of a Plugin. | Used when you need to reuse a common set of configuration instances so that they can be extracted into a `plugin-config` and bound to different Routes. |
| Takes effect on all the entities bound to the configuration instances of the Plugin. | Takes effect on Routes bound to the `plugin-config`. |
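As a hedged illustration of the `plugin-config` side, the sketch below extracts a reusable set of plugin instances into a `plugin-config` and binds it to a Route via `plugin_config_id`; the Route, the `limit-count` settings, and `$admin_key` are placeholders:

```shell
# Create a reusable plugin-config (id 1).
curl http://127.0.0.1:9180/apisix/admin/plugin_configs/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
    "plugins": {
        "limit-count": {
            "count": 100,
            "time_window": 60,
            "rejected_code": 503,
            "key": "remote_addr"
        }
    }
}'

# Bind it to a Route; other Routes can reference the same plugin_config_id.
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
    "uri": "/hello",
    "plugin_config_id": "1",
    "upstream": {
        "type": "roundrobin",
        "nodes": {
            "127.0.0.1:1980": 1
        }
    }
}'
```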
## After deploying Apache APISIX, how do I check whether the APISIX data plane is alive?
You can create a route named `health-info` and enable the [fault-injection](https://apisix.apache.org/docs/apisix/plugins/fault-injection/) plugin (where `YOUR-TOKEN` is your Admin API token and `127.0.0.1` is the IP address of the control plane; adjust these to your environment):
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/health-info \
-H 'X-API-KEY: YOUR-TOKEN' -X PUT -d '
{
"plugins": {
"fault-injection": {
"abort": {
"http_status": 200,
"body": "fine"
}
}
},
"uri": "/status"
}'
```
Verification:
Access `/status` on the Apache APISIX data plane. If the response code is 200, APISIX is alive.
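For example, a monitoring host could probe the data plane like this (assuming the data plane listens on the default port 9080):

```shell
curl -i http://127.0.0.1:9080/status
```

A `200` response with the body `fine` (as configured in the fault-injection plugin above) indicates the data plane is alive.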
:::note
This method only detects whether the APISIX data plane is alive. It does not mean that routing and other functions of APISIX are working correctly; those require additional route-level checks.
:::
## What are the scenarios with high APISIX latency related to [etcd](https://etcd.io/) and how to fix them?
etcd is the data storage component of APISIX, so its stability directly affects the stability of APISIX.
In practice, if APISIX uses a certificate to connect to etcd over HTTPS, the following two high-latency problems can occur when querying or writing data:
1. Querying or writing data through the APISIX Admin API.
2. In monitoring scenarios, Prometheus scrapes of the APISIX data plane metrics API time out.
These latency problems seriously affect the service stability of APISIX. They occur mainly because etcd exposes two modes of operation, HTTP(S) and gRPC, and APISIX uses HTTP(S) to operate etcd by default.
In this scenario, etcd has an HTTP/2-related issue: if etcd is operated over HTTPS (HTTP is not affected), the upper limit of concurrent HTTP/2 streams per connection is Golang's default of `250`. Therefore, when there are many APISIX data plane nodes and the total number of connections between APISIX nodes and etcd exceeds this limit, APISIX API responses become very slow.
In Golang, this default limit of `250` concurrent streams is defined as follows:
```go
package http2
import ...
const (
prefaceTimeout = 10 * time.Second
firstSettingsTimeout = 2 * time.Second // should be in-flight with preface anyway
handlerChunkWriteSize = 4 << 10
defaultMaxStreams = 250 // TODO: make this 100 as the GFE seems to?
maxQueuedControlFrames = 10000
)
```
etcd officially maintains two main branches, `3.4` and `3.5`. In the `3.4` series, the recently released `3.4.20` version has fixed this issue. As for the `3.5` series, the maintainers have been preparing the `3.5.5` release for some time, but it has not been released as of this writing (2022-09-13). So, if you are using an etcd version lower than `3.5.5`, you can refer to the following ways to solve this problem:
1. Change the communication method between APISIX and etcd from HTTPS to HTTP.
2. Roll back the etcd to `3.4.20`.
3. Clone the etcd source code and compile the `release-3.5` branch directly (this branch has fixed the problem of HTTP/2 connections, but the new version has not been released yet).
The way to recompile etcd is as follows:
```shell
git checkout release-3.5
make GOOS=linux GOARCH=amd64
```
The compiled binary is in the `bin` directory; replace the etcd binary in your server environment with it, and then restart etcd.
For more information, please refer to:
- [when etcd node have many http long polling connections, it may cause etcd to respond slowly to http requests.](https://github.com/etcd-io/etcd/issues/14185)
- [bug: when apisix starts for a while, its communication with etcd starts to time out](https://github.com/apache/apisix/issues/7078)
- [the prometheus metrics API is too slow](https://github.com/apache/apisix/issues/7353)
- [Support configuring `MaxConcurrentStreams` for http2](https://github.com/etcd-io/etcd/pull/14169)
Another solution is to switch to an experimental gRPC-based configuration synchronization. This requires setting `use_grpc: true` in the configuration file `conf/config.yaml`:
```yaml
etcd:
  use_grpc: true
  host:
    - "http://127.0.0.1:2379"
  prefix: "/apisix"
```
## Why is the file-logger logging garbled?
If you are using the `file-logger` plugin but getting garbled logs, one possible reason is that the upstream returned a compressed response body. You can fix this by setting the `Accept-Encoding` request header so that compressed responses are not requested, using the [proxy-rewrite](https://apisix.apache.org/docs/apisix/plugins/proxy-rewrite/) plugin:
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 \
-H 'X-API-KEY: YOUR-TOKEN' -X PUT -d '
{
"methods":[
"GET"
],
"uri":"/test/index.html",
"plugins":{
"proxy-rewrite":{
"headers":{
"set":{
"accept-encoding":"gzip;q=0,deflate,sdch"
}
}
}
},
"upstream":{
"type":"roundrobin",
"nodes":{
"127.0.0.1:80":1
}
}
}'
```
## How do I configure APISIX to use etcd with authentication enabled?
Suppose you have an etcd cluster with authentication enabled. To access this cluster, you need to configure the correct username and password for Apache APISIX in `conf/config.yaml`:
```yaml
deployment:
  etcd:
    host:
      - "http://127.0.0.1:2379"
    user: etcd_user         # username for etcd
    password: etcd_password # password for etcd
```
For other etcd configurations, such as expiration times and retries, refer to the `etcd` section in the sample configuration file `conf/config.yaml.example`.
## What is the difference between SSLs, `tls.client_cert` in upstream configurations, and `ssl_trusted_certificate` in `config.yaml`?
SSL objects (`ssls`) are managed through the `/apisix/admin/ssls` API. They are used for managing TLS certificates, which may be used during the TLS handshake between Apache APISIX and its clients. Apache APISIX uses Server Name Indication (SNI) to differentiate between certificates of different domains.
The `tls.client_cert`, `tls.client_key`, and `tls.client_cert_id` in upstream are used for mTLS communication with the upstream.
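As a hedged sketch of the upstream mTLS case (the upstream address `upstream.example.com` and the PEM placeholders are assumptions for illustration):

```shell
curl http://127.0.0.1:9180/apisix/admin/upstreams/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
    "type": "roundrobin",
    "scheme": "https",
    "tls": {
        "client_cert": "<client certificate in PEM format>",
        "client_key": "<client key in PEM format>"
    },
    "nodes": {
        "upstream.example.com:443": 1
    }
}'
```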
The `ssl_trusted_certificate` in `config.yaml` configures a trusted CA certificate. It is used within APISIX to verify certificates signed by private authorities, so that APISIX does not reject them. Note that it is not used to trust the certificates of APISIX's upstreams, because APISIX does not verify the legality of upstream certificates. Therefore, even if an upstream uses an invalid TLS certificate, it can still be accessed without configuring a root certificate.
## Where can I find more answers?
You can find more answers on:
- [Apache APISIX Slack Channel](/docs/general/join/#join-the-slack-channel)
- [Ask questions on APISIX mailing list](/docs/general/join/#subscribe-to-the-mailing-list)
- [GitHub Issues](https://github.com/apache/apisix/issues?q=is%3Aissue+is%3Aopen+sort%3Aupdated-desc) and [GitHub Discussions](https://github.com/apache/apisix/discussions)


@@ -0,0 +1,54 @@
---
title: APISIX variable
keywords:
- Apache APISIX
- API Gateway
- APISIX variable
description: This article describes the variables supported by Apache APISIX.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
Besides [NGINX variables](http://nginx.org/en/docs/varindex.html), APISIX also provides
additional variables.
## List of variables
| Variable Name | Origin | Description | Example |
|-------------------- | ---------- | ----------------------------------------------------------------------------------- | ------------- |
| balancer_ip | core | The IP of picked upstream server. | 192.168.1.2 |
| balancer_port | core | The port of picked upstream server. | 80 |
| consumer_name | core | Username of Consumer. | |
| consumer_group_id | core | Group ID of Consumer. | |
| graphql_name | core | The [operation name](https://graphql.org/learn/queries/#operation-name) of GraphQL. | HeroComparison |
| graphql_operation | core | The operation type of GraphQL. | mutation |
| graphql_root_fields | core | The top level fields of GraphQL. | ["hero"] |
| mqtt_client_id | mqtt-proxy | The client id in MQTT protocol. | |
| route_id | core | Id of Route. | |
| route_name | core | Name of Route. | |
| service_id | core | Id of Service. | |
| service_name | core | Name of Service. | |
| redis_cmd_line | Redis | The content of Redis command. | |
| resp_body           | core       | If a logger plugin supports logging the response body (for example, by configuring `include_resp_body: true`), this variable can be used in the log format. | |
| rpc_time | xRPC | Time spent at the rpc request level. | |
You can also register your own [variable](./plugin-develop.md#register-custom-variable).
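As a hedged illustration, these variables can be referenced with a `$` prefix wherever NGINX variables are accepted, for example in a logger's `log_format` configured through plugin metadata (assuming the `http-logger` plugin is in use and `$admin_key` holds your Admin API key):

```shell
curl http://127.0.0.1:9180/apisix/admin/plugin_metadata/http-logger -H "X-API-KEY: $admin_key" -X PUT -d '
{
    "log_format": {
        "host": "$host",
        "route_id": "$route_id",
        "balancer_ip": "$balancer_ip"
    }
}'
```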


@@ -0,0 +1,51 @@
---
title: Architecture
keywords:
- API Gateway
- Apache APISIX
- APISIX architecture
description: Architecture of Apache APISIX—the Cloud Native API Gateway.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
APISIX is built on top of Nginx and [ngx_lua](https://github.com/openresty/lua-nginx-module) leveraging the power offered by LuaJIT. See [Why Apache APISIX chose Nginx and Lua to build API Gateway?](https://apisix.apache.org/blog/2021/08/25/why-apache-apisix-chose-nginx-and-lua/).
![flow-software-architecture](https://raw.githubusercontent.com/apache/apisix/master/docs/assets/images/flow-software-architecture.png)
APISIX has two main parts:
1. APISIX core, Lua plugin, multi-language Plugin runtime, and the WASM plugin runtime.
2. Built-in Plugins that add features for observability, security, traffic control, etc.
The APISIX core handles important functions like matching Routes, load balancing, service discovery, and configuration management, and provides a management API. It also includes the APISIX Plugin runtime supporting Lua and multi-language Plugins (Go, Java, Python, JavaScript, etc.), as well as the experimental WASM Plugin runtime.
APISIX also has a set of [built-in Plugins](https://apisix.apache.org/docs/apisix/plugins/batch-requests) that add features like authentication, security, observability, etc. They are written in Lua.
## Request handling process
The diagram below shows how APISIX handles an incoming request and applies corresponding Plugins:
![flow-load-plugin](https://raw.githubusercontent.com/apache/apisix/master/docs/assets/images/flow-load-plugin.png)
## Plugin hierarchy
The chart below shows the order in which different types of Plugin are applied to a request:
![flow-plugin-internal](https://raw.githubusercontent.com/apache/apisix/master/docs/assets/images/flow-plugin-internal.png)


@@ -0,0 +1,276 @@
---
title: Running APISIX in AWS with AWS CDK
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
[APISIX](https://github.com/apache/apisix) is a cloud-native microservices API gateway, delivering the ultimate performance, security, open source and scalable platform for all your APIs and microservices.
## Architecture
This reference architecture walks you through building **APISIX** as a serverless container API Gateway on top of AWS Fargate with AWS CDK.
![Apache APISIX Serverless Architecture](../../assets/images/aws-fargate-cdk.png)
## Generate an AWS CDK project with `projen`
```bash
$ mkdir apisix-aws
$ cd $_
$ npx projen new awscdk-app-ts
```
Update `.projenrc.js` with the following content:
```js
const { AwsCdkTypeScriptApp } = require('projen');
const project = new AwsCdkTypeScriptApp({
cdkVersion: "1.70.0",
name: "apisix-aws",
cdkDependencies: [
'@aws-cdk/aws-ec2',
'@aws-cdk/aws-ecs',
'@aws-cdk/aws-ecs-patterns',
]
});
project.synth();
```
Update the project:
```bash
$ npx projen
```
## Update `src/main.ts`
```ts
import * as cdk from '@aws-cdk/core';
import { Vpc, Port } from '@aws-cdk/aws-ec2';
import { Cluster, ContainerImage, TaskDefinition, Compatibility } from '@aws-cdk/aws-ecs';
import { ApplicationLoadBalancedFargateService, NetworkLoadBalancedFargateService } from '@aws-cdk/aws-ecs-patterns';
export class ApiSixStack extends cdk.Stack {
constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
super(scope, id, props);
const vpc = Vpc.fromLookup(this, 'VPC', {
isDefault: true
})
const cluster = new Cluster(this, 'Cluster', {
vpc
})
/**
* ApiSix service
*/
const taskDefinition = new TaskDefinition(this, 'TaskApiSix', {
compatibility: Compatibility.FARGATE,
memoryMiB: '512',
cpu: '256'
})
taskDefinition
.addContainer('apisix', {
image: ContainerImage.fromRegistry('iresty/apisix'),
})
.addPortMappings({
containerPort: 9080
})
taskDefinition
.addContainer('etcd', {
image: ContainerImage.fromRegistry('gcr.azk8s.cn/etcd-development/etcd:v3.3.12'),
// image: ContainerImage.fromRegistry('gcr.io/etcd-development/etcd:v3.3.12'),
})
.addPortMappings({
containerPort: 2379
})
const svc = new ApplicationLoadBalancedFargateService(this, 'ApiSixService', {
cluster,
taskDefinition,
})
svc.targetGroup.setAttribute('deregistration_delay.timeout_seconds', '30')
svc.targetGroup.configureHealthCheck({
interval: cdk.Duration.seconds(5),
healthyHttpCodes: '404',
healthyThresholdCount: 2,
unhealthyThresholdCount: 3,
timeout: cdk.Duration.seconds(4)
})
/**
* PHP service
*/
const taskDefinitionPHP = new TaskDefinition(this, 'TaskPHP', {
compatibility: Compatibility.FARGATE,
memoryMiB: '512',
cpu: '256'
})
taskDefinitionPHP
.addContainer('php', {
image: ContainerImage.fromRegistry('abiosoft/caddy:php'),
})
.addPortMappings({
containerPort: 2015
})
const svcPHP = new NetworkLoadBalancedFargateService(this, 'PhpService', {
cluster,
taskDefinition: taskDefinitionPHP,
assignPublicIp: true,
})
// allow Fargate task behind NLB to accept all traffic
svcPHP.service.connections.allowFromAnyIpv4(Port.tcp(2015))
svcPHP.targetGroup.setAttribute('deregistration_delay.timeout_seconds', '30')
svcPHP.loadBalancer.setAttribute('load_balancing.cross_zone.enabled', 'true')
new cdk.CfnOutput(this, 'ApiSixDashboardURL', {
value: `http://${svc.loadBalancer.loadBalancerDnsName}/apisix/dashboard/`
})
}
}
const devEnv = {
account: process.env.CDK_DEFAULT_ACCOUNT,
region: process.env.CDK_DEFAULT_REGION,
};
const app = new cdk.App();
new ApiSixStack(app, 'apisix-stack-dev', { env: devEnv });
app.synth();
```
## Deploy the APISIX Stack with AWS CDK
```bash
$ cdk diff
$ cdk deploy
```
When the deployment completes, some outputs will be returned:
```bash
Outputs:
apiSix.PhpServiceLoadBalancerDNS5E5BAB1B = apiSi-PhpSe-FOL2MM4TW7G8-09029e095ab36fcc.elb.us-west-2.amazonaws.com
apiSix.ApiSixDashboardURL = http://apiSi-ApiSi-1TM103DN35GRY-1477666967.us-west-2.elb.amazonaws.com/apisix/dashboard/
apiSix.ApiSixServiceLoadBalancerDNSD4E5B8CB = apiSi-ApiSi-1TM103DN35GRY-1477666967.us-west-2.elb.amazonaws.com
apiSix.ApiSixServiceServiceURLF6EC7872 = http://apiSi-ApiSi-1TM103DN35GRY-1477666967.us-west-2.elb.amazonaws.com
```
Open the `apiSix.ApiSixDashboardURL` in your browser and you will see the login prompt.
### Configure the upstream nodes
All upstream nodes run as **AWS Fargate** tasks and register to the **NLB (Network Load Balancer)**, which exposes multiple static IP addresses. We can query these IP addresses by running **nslookup** against **apiSix.PhpServiceLoadBalancerDNS5E5BAB1B** like this:
```bash
$ nslookup apiSi-PhpSe-FOL2MM4TW7G8-09029e095ab36fcc.elb.us-west-2.amazonaws.com
Server: 192.168.31.1
Address: 192.168.31.1#53
Non-authoritative answer:
Name: apiSi-PhpSe-FOL2MM4TW7G8-09029e095ab36fcc.elb.us-west-2.amazonaws.com
Address: 44.224.124.213
Name: apiSi-PhpSe-FOL2MM4TW7G8-09029e095ab36fcc.elb.us-west-2.amazonaws.com
Address: 18.236.43.167
Name: apiSi-PhpSe-FOL2MM4TW7G8-09029e095ab36fcc.elb.us-west-2.amazonaws.com
Address: 35.164.164.178
Name: apiSi-PhpSe-FOL2MM4TW7G8-09029e095ab36fcc.elb.us-west-2.amazonaws.com
Address: 44.226.102.63
```
Configure the returned IP addresses as the upstream nodes in your **APISIX** dashboard, then create the **Services** and **Routes**. Let's say we use `/index.php` as the URI of the first Route for our first **Service**, backed by the **Upstream** IP addresses.
![upstream with AWS NLB IP addresses](../../assets/images/aws-nlb-ip-addr.png)
![service with created upstream](../../assets/images/aws-define-service.png)
![define route with service and uri](../../assets/images/aws-define-route.png)
## Validation
OK. Let's test the `/index.php` on `{apiSix.ApiSixServiceServiceURL}/index.php`
![Testing Apache APISIX on AWS Fargate](../../assets/images/aws-caddy-php-welcome-page.png)
Now we are successfully running **APISIX** on AWS Fargate as a serverless container API gateway service.
## Clean up
```bash
$ cdk destroy
```
## Running APISIX in AWS China Regions
Update `src/main.ts`:
```js
taskDefinition
.addContainer('etcd', {
image: ContainerImage.fromRegistry('gcr.azk8s.cn/etcd-development/etcd:v3.3.12'),
// image: ContainerImage.fromRegistry('gcr.io/etcd-development/etcd:v3.3.12'),
})
.addPortMappings({
containerPort: 2379
})
```
_(read [here](https://github.com/iresty/docker-apisix/blob/9a731f698171f4838e9bc0f1c05d6dda130ca89b/example/docker-compose.yml#L18-L19) for more reference)_
Run `cdk deploy` and specify your preferred AWS region in China.
```bash
# let's say we have another AWS_PROFILE for China regions called 'cn'
# make sure you have aws configure --profile=cn properly.
#
# deploy to NingXia region
$ cdk deploy --profile cn -c region=cn-northwest-1
# deploy to Beijing region
$ cdk deploy --profile cn -c region=cn-north-1
```
In the following case, we got the `Outputs` returned for the **AWS Ningxia region (cn-northwest-1)**:
```bash
Outputs:
apiSix.PhpServiceLoadBalancerDNS5E5BAB1B = apiSi-PhpSe-1760FFS3K7TXH-562fa1f7f642ec24.elb.cn-northwest-1.amazonaws.com.cn
apiSix.ApiSixDashboardURL = http://apiSi-ApiSi-123HOROQKWZKA-1268325233.cn-northwest-1.elb.amazonaws.com.cn/apisix/dashboard/
apiSix.ApiSixServiceLoadBalancerDNSD4E5B8CB = apiSi-ApiSi-123HOROQKWZKA-1268325233.cn-northwest-1.elb.amazonaws.com.cn
apiSix.ApiSixServiceServiceURLF6EC7872 = http://apiSi-ApiSi-123HOROQKWZKA-1268325233.cn-northwest-1.elb.amazonaws.com.cn
```
Open the `apiSix.ApiSixDashboardURL` URL and log in to configure your **APISIX** in AWS China region.
_TBD_
## Decouple APISIX and etcd3 on AWS
For high availability and state consistency, you might want to run **etcd3** as a separate cluster decoupled from **APISIX**, gaining not only better performance but also fault tolerance and reliable state consistency.
_TBD_


@@ -0,0 +1,149 @@
---
title: Batch Processor
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
The batch processor can be used to aggregate entries (logs or any data) and process them in a batch.
When `batch_max_size` is set to 1, the processor executes each entry immediately. Setting `batch_max_size`
to more than 1 will aggregate entries until the batch reaches the max size or the timeout expires.
## Configurations
The only mandatory parameter to create a batch processor is a function. The function will be executed when the batch reaches the max size
or when the buffer duration is exceeded.
| Name | Type | Requirement | Default | Valid | Description |
| ---------------- | ------- | ----------- | ------- | ------- | ------------------------------------------------------------ |
| name             | string  | optional    | logger's name | ["http logger",...] | A unique identifier of the batch processor. It defaults to the name of the logger plugin that calls the batch processor; for example, the `name` of the `http logger` plugin is "http logger". |
| batch_max_size | integer | optional | 1000 | [1,...] | Sets the maximum number of logs sent in each batch. When the number of logs reaches the set maximum, all logs will be automatically pushed to the HTTP/HTTPS service. |
| inactive_timeout | integer | optional | 5 | [1,...] | The maximum time to refresh the buffer (in seconds). When the maximum refresh time is reached, all logs will be automatically pushed to the HTTP/HTTPS service regardless of whether the number of logs in the buffer reaches the maximum number set. |
| buffer_duration | integer | optional | 60 | [1,...] | Maximum age in seconds of the oldest entry in a batch before the batch must be processed. |
| max_retry_count | integer | optional | 0 | [0,...] | Maximum number of retries before removing the entry from the processing pipeline when an error occurs. |
| retry_delay | integer | optional | 1 | [0,...] | Number of seconds the process execution should be delayed if the execution fails. |
The following code shows an example of how to use batch processor in your plugin:
```lua
local bp_manager_mod = require("apisix.utils.batch-processor-manager")
...
local plugin_name = "xxx-logger"
local batch_processor_manager = bp_manager_mod.new(plugin_name)
local schema = {...}
local _M = {
...
name = plugin_name,
schema = batch_processor_manager:wrap_schema(schema),
}
...
function _M.log(conf, ctx)
local entry = {...} -- data to log
if batch_processor_manager:add_entry(conf, entry) then
return
end
-- create a new processor if not found
-- entries is an array table of entry, which can be processed in batch
local func = function(entries)
-- serialize to json array core.json.encode(entries)
-- process/send data
return true
-- return false, err_msg, first_fail if failed
-- first_fail(optional) indicates first_fail-1 entries have been successfully processed
-- and during processing of entries[first_fail], the error occurred. So the batch processor
-- only retries for the entries having index >= first_fail as per the retry policy.
end
batch_processor_manager:add_entry_to_new_processor(conf, entry, ctx, func)
end
```
The batch processor's configuration will be set inside the plugin's configuration.
For example:
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"plugins": {
"http-logger": {
"uri": "http://mockbin.org/bin/:ID",
"batch_max_size": 10,
"max_retry_count": 1
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
},
"uri": "/hello"
}'
```
If your plugin only uses one global batch processor,
you can also use the processor directly:
```lua
local entry = {...} -- data to log
if log_buffer then
log_buffer:push(entry)
return
end
local config_bat = {
name = config.name,
retry_delay = config.retry_delay,
...
}
local err
-- entries is an array table of entry, which can be processed in batch
local func = function(entries)
...
return true
-- return false, err_msg, first_fail if failed
end
log_buffer, err = batch_processor:new(func, config_bat)
if not log_buffer then
core.log.warn("error when creating the batch processor: ", err)
return
end
log_buffer:push(entry)
```
Note: Please make sure the batch max size (entry count) is within the limits of the function execution.
The timer to flush the batch runs based on the `inactive_timeout` configuration. Thus, for optimal usage,
keep the `inactive_timeout` smaller than the `buffer_duration`.


@@ -0,0 +1,151 @@
---
title: Benchmark
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
### Benchmark Environments
n1-highcpu-8 (8 vCPUs, 7.2 GB memory) on Google Cloud
But we **only** used 4 cores to run APISIX, and left 4 cores for system and [wrk](https://github.com/wg/wrk),
which is the HTTP benchmarking tool.
### Benchmark Test for reverse proxy
APISIX was used only as a reverse proxy server, with no logging, rate limiting, or other plugins enabled, and the response size was 1 KB.
#### QPS
The x-axis shows the number of CPU cores, and the y-axis shows QPS.
![benchmark-1](../../assets/images/benchmark-1.jpg)
#### Latency
Note that the y-axis latency is in **microseconds (μs)**, not milliseconds.
![latency-1](../../assets/images/latency-1.jpg)
#### Flame Graph
The result of Flame Graph:
![flamegraph-1](../../assets/images/flamegraph-1.jpg)
If you want to run the benchmark test on your machine, you should run another NGINX instance listening on port 80.
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"methods": ["GET"],
"uri": "/hello",
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:80": 1,
"127.0.0.2:80": 1
}
}
}'
```
then run wrk:
```shell
wrk -d 60 --latency http://127.0.0.1:9080/hello
```
### Benchmark Test for reverse proxy, enabled 2 plugins
APISIX was used only as a reverse proxy server, with the rate-limiting (limit-count) and prometheus plugins enabled, and the response size was 1 KB.
#### QPS
The x-axis shows the number of CPU cores, and the y-axis shows QPS.
![benchmark-2](../../assets/images/benchmark-2.jpg)
#### Latency
Note that the y-axis latency is in **microseconds (μs)**, not milliseconds.
![latency-2](../../assets/images/latency-2.jpg)
#### Flame Graph
The result of Flame Graph:
![flamegraph-2](../../assets/images/flamegraph-2.jpg)
If you want to run the benchmark test on your machine, you should run another NGINX instance listening on port 80.
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"methods": ["GET"],
"uri": "/hello",
"plugins": {
"limit-count": {
"count": 999999999,
"time_window": 60,
"rejected_code": 503,
"key": "remote_addr"
},
"prometheus":{}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:80": 1,
"127.0.0.2:80": 1
}
}
}'
```
then run wrk:
```shell
wrk -d 60 --latency http://127.0.0.1:9080/hello
```
For more reference on how to run the benchmark test, you can see this [PR](https://github.com/apache/apisix/pull/6136) and this [script](https://gist.github.com/membphis/137db97a4bf64d3653aa42f3e016bd01).
:::tip
If you want to run the benchmark with a large number of connections, you may have to update the [**keepalive**](https://github.com/apache/apisix/blob/master/conf/config.yaml.example#L241) config by adding the configuration to [`config.yaml`](https://github.com/apache/apisix/blob/master/conf/config.yaml) and reloading APISIX. Connections exceeding this number will become short connections. You can run the following command to test the benchmark with a large number of connections:
```bash
wrk -t200 -c5000 -d30s http://127.0.0.1:9080/hello
```
For more details, you can refer to [Module ngx_http_upstream_module](http://nginx.org/en/docs/http/ngx_http_upstream_module.html).
:::


@@ -0,0 +1,119 @@
---
id: build-apisix-dev-environment-devcontainers
title: Build development environment with Dev Containers
description: This paper introduces how to quickly start the APISIX API Gateway development environment using Dev Containers.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
Previously, building and developing APISIX on Linux or macOS required developers to install the runtime environment and toolchain themselves, which they might not be familiar with.
Because multiple operating systems and CPU ISAs need to be supported, finding and installing the right dependencies and toolchains is inherently complex.
:::note
The tutorial can be used as an alternative to a [bare-metal environment](building-apisix.md) or a [macOS container development environment](build-apisix-dev-environment-on-mac.md).
It only requires an environment running Docker or a similar alternative (the `docker`/`docker compose` commands are required); no other dependent components need to be installed on your host machine.
:::
## Supported systems and CPU ISA
- Linux
- AMD64
- ARM64
- Windows (with WSL2 supported)
- AMD64
- macOS
- ARM64
- AMD64
## Quick Setup of Apache APISIX Development Environment
### Implementation Idea
We use Dev Containers to build the development environment; when we open the APISIX project in the IDE, we get access to the container-driven runtime environment.
There, etcd is already running and we can start APISIX directly.
### Steps
:::note
The following uses Visual Studio Code, which has built-in integration with Dev Containers.
In theory you could also use any other editor or IDE that integrates with Dev Containers.
:::
First, clone the APISIX source code and open the project in Visual Studio Code.
```shell
git clone https://github.com/apache/apisix.git
cd apisix
code . # VSCode needs to be in the PATH environment variable; you can also open the project directory manually from the UI.
```
Next, switch to Dev Containers. Open the VSCode Command Palette, and execute `Dev Containers: Reopen in Container`.
![VSCode Command open in container](../../assets/images/build-devcontainers-vscode-command.png)
VSCode will open the Dev Containers project in a new window, where it will build the runtime and install the toolchain according to the Dockerfile before starting the connection and finally installing the APISIX dependencies.
:::note
This process requires a reliable network connection; it accesses Docker Hub, GitHub, and some other sites. You need to ensure network connectivity yourself, otherwise the container build may fail.
:::
Wait a while; depending on network and computer performance, this may take anywhere from a few minutes to tens of minutes. You can click on the progress bar in the bottom right corner to view a live log and check whether anything is unusually stuck.
If you encounter any problems, you can search or ask questions in [GitHub Issues](https://github.com/apache/apisix/issues) or [GitHub Discussions](https://github.com/apache/apisix/discussions), and community members will respond as promptly as possible.
![VSCode dev containers building progress bar](../../assets/images/build-devcontainers-vscode-progressbar.png)
When the process in the terminal completes, the development environment is ready, and etcd is already running.
Start APISIX with the following command:
```shell
make run
```
Now you can start writing code and test cases; the testing tools are also available:
```shell
export TEST_NGINX_BINARY=openresty
# run all tests
make test
# or run a specific test case file
FLUSH_ETCD=1 prove -Itest-nginx/lib -I. -r t/admin/api.t
```
## FAQ
### Where's the code? When I delete the container, are the changes lost?
The code stays on your host, in the directory where you cloned the APISIX source code; the container mounts it as a volume. The container holds only the runtime environment, not the source code, so no changes are lost when you close or delete the container.
Also, `git` is already installed in the container, so you can commit changes directly from there.

View File

@@ -0,0 +1,94 @@
---
id: build-apisix-dev-environment-on-mac
title: Build development environment on Mac
description: This paper introduces how to use Docker to quickly build the development environment of API gateway Apache APISIX on Mac.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
If you want to quickly build and develop APISIX on your Mac platform, you can refer to this tutorial.
:::note
This tutorial is suitable when you need to quickly start development on the Mac platform. If you want to go further and have a better development experience, a Linux-based virtual machine is the better choice, or you can use such a system directly as your development environment.
You can see the specific supported systems [here](install-dependencies.md#install).
:::
## Quick Setup of Apache APISIX Development Environment
### Implementation Idea
We use Docker to build the test environment of Apache APISIX. When the container starts, we mount the Apache APISIX source code into it, and then we can build and run test cases inside the container.
### Implementation Steps
First, clone the APISIX source code, build an image that can run test cases, and compile Apache APISIX.
```shell
git clone https://github.com/apache/apisix.git
cd apisix
docker build -t apisix-dev-env -f example/build-dev-image.dockerfile .
```
Next, start Etcd:
```shell
docker run -d --name etcd-apisix --net=host pachyderm/etcd:v3.5.2
```
Mount the APISIX directory and start the development environment container:
```shell
docker run -d --name apisix-dev-env --net=host -v $(pwd):/apisix:rw apisix-dev-env:latest
```
Finally, enter the container, build the Apache APISIX runtime, and configure the test environment:
```shell
docker exec -it apisix-dev-env make deps
docker exec -it apisix-dev-env ln -s /usr/bin/openresty /usr/bin/nginx
```
### Run and Stop APISIX
```shell
docker exec -it apisix-dev-env make run
docker exec -it apisix-dev-env make stop
```
:::note
If you encounter an error message like `nginx: [emerg] bind() to unix:/apisix/logs/worker_events.sock failed (95: Operation not supported)` while running `make run`, change the `File Sharing` settings of Docker Desktop:
![Docker-Desktop File Sharing Setting](../../assets/images/update-docker-desktop-file-sharing.png)
Switching to either `gRPC FUSE` or `osxfs` resolves this issue.
:::
### Run Specific Test Cases
```shell
docker exec -it apisix-dev-env prove t/admin/routes.t
```

View File

@@ -0,0 +1,267 @@
---
id: building-apisix
title: Building APISIX from source
keywords:
- API Gateway
- Apache APISIX
- Code Contribution
- Building APISIX
description: Guide for building and running APISIX locally for development.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
If you are looking to setup a development environment or contribute to APISIX, this guide is for you.
If you are looking to quickly get started with APISIX, check out the other [installation methods](./installation-guide.md).
:::note
To build an APISIX docker image from source code, see [build image from source code](https://apisix.apache.org/docs/docker/build/#build-an-image-from-customizedpatched-source-code).
To build and package APISIX for a specific platform, see [apisix-build-tools](https://github.com/api7/apisix-build-tools) instead.
:::
## Building APISIX from source
First of all, we need to specify the branch to be built:
```shell
APISIX_BRANCH='release/3.13'
```
Then, you can run the following command to clone the APISIX source code from GitHub:
```shell
git clone --depth 1 --branch ${APISIX_BRANCH} https://github.com/apache/apisix.git apisix-${APISIX_BRANCH}
```
Alternatively, you can also download the source package from the [Downloads](https://apisix.apache.org/downloads/) page. Note that source packages here are not distributed with test cases.
Before installation, install [OpenResty](https://openresty.org/en/installation.html).
Next, navigate to the directory, install dependencies, and build APISIX.
```shell
cd apisix-${APISIX_BRANCH}
make deps
make install
```
This will install the runtime-dependent Lua libraries, `apisix-runtime`, and the `apisix` CLI tool.
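As a quick sanity check, you can confirm that the CLI is available on your `PATH`:
```shell
apisix version
```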
:::note
If you get an error message like `Could not find header file for LDAP/PCRE/openssl` while running `make deps`, use this solution.
`luarocks` supports custom compile-time dependencies (see: [Config file format](https://github.com/luarocks/luarocks/wiki/Config-file-format)). You can use a third-party tool to install the missing packages and add its installation directory to `luarocks`' `variables` table. This method works on macOS, Ubuntu, CentOS, and other similar operating systems.
The solution below is for macOS but it works similarly for other operating systems:
1. Install `openldap` by running:
```shell
brew install openldap
```
2. Locate the installation directory by running:
```shell
brew --prefix openldap
```
3. Add this path to the project configuration file by either of the two methods shown below:
1. You can use the `luarocks config` command to set `LDAP_DIR`:
```shell
luarocks config variables.LDAP_DIR /opt/homebrew/cellar/openldap/2.6.1
```
2. You can also change the default configuration file of `luarocks`. Open the file `~/.luarocks/config-5.1.lua` and add the following:
```shell
variables = { LDAP_DIR = "/opt/homebrew/cellar/openldap/2.6.1", LDAP_INCDIR = "/opt/homebrew/cellar/openldap/2.6.1/include", }
```
`/opt/homebrew/cellar/openldap/` is the default installation path of `openldap` on Apple Silicon macOS machines. On Intel machines, the default path is `/usr/local/opt/openldap/`.
:::
To uninstall the APISIX runtime, run:
```shell
make uninstall
make undeps
```
:::danger
This operation will remove the files completely.
:::
## Installing etcd
APISIX uses [etcd](https://github.com/etcd-io/etcd) to save and synchronize configuration. Before running APISIX, you need to install etcd on your machine. Installation methods based on your operating system are mentioned below.
<Tabs
groupId="os"
defaultValue="linux"
values={[
{label: 'Linux', value: 'linux'},
{label: 'macOS', value: 'mac'},
]}>
<TabItem value="linux">
```shell
ETCD_VERSION='3.4.18'
wget https://github.com/etcd-io/etcd/releases/download/v${ETCD_VERSION}/etcd-v${ETCD_VERSION}-linux-amd64.tar.gz
tar -xvf etcd-v${ETCD_VERSION}-linux-amd64.tar.gz && \
cd etcd-v${ETCD_VERSION}-linux-amd64 && \
sudo cp -a etcd etcdctl /usr/bin/
nohup etcd >/tmp/etcd.log 2>&1 &
```
</TabItem>
<TabItem value="mac">
```shell
brew install etcd
brew services start etcd
```
</TabItem>
</Tabs>
## Running and managing APISIX server
To initialize the configuration file, within the APISIX directory, run:
```shell
apisix init
```
:::tip
You can run `apisix help` to see a list of available commands.
:::
You can then test the created configuration file by running:
```shell
apisix test
```
Finally, you can run the command below to start APISIX:
```shell
apisix start
```
To stop APISIX, you can use either the `quit` or the `stop` subcommand.
`apisix quit` will gracefully shut down APISIX. It ensures that all received requests are completed before stopping.
```shell
apisix quit
```
Whereas the `apisix stop` command forces a shutdown and discards all pending requests.
```shell
apisix stop
```
## Building runtime for APISIX
Some features of APISIX require additional Nginx modules to be introduced into OpenResty.
To use these features, you need to build a custom distribution of OpenResty (apisix-runtime). See [apisix-build-tools](https://github.com/api7/apisix-build-tools) for setting up your build environment and building it.
## Running tests
The steps below show how to run the test cases for APISIX:
1. Install [cpanminus](https://metacpan.org/pod/App::cpanminus#INSTALLATION), the package manager for Perl.
2. Install the [test-nginx](https://github.com/openresty/test-nginx) dependencies with `cpanm`:
```shell
sudo cpanm --notest Test::Nginx IPC::Run > build.log 2>&1 || (cat build.log && exit 1)
```
3. Clone the test-nginx source code locally:
```shell
git clone https://github.com/openresty/test-nginx.git
```
4. Append the current directory to Perl's module directory by running:
```shell
export PERL5LIB=.:$PERL5LIB
```
You can specify the Nginx binary path by running:
```shell
TEST_NGINX_BINARY=/usr/local/bin/openresty prove -Itest-nginx/lib -r t
```
5. Run the tests by running:
```shell
make test
```
:::note
Some tests rely on external services and system configuration modification. See [ci/linux_openresty_common_runner.sh](https://github.com/apache/apisix/blob/master/ci/linux_openresty_common_runner.sh) for a complete test environment build.
:::
### Troubleshooting
These are some common troubleshooting steps for running APISIX test cases.
#### Configuring Nginx path
For the error `Error unknown directive "lua_package_path" in /API_ASPIX/apisix/t/servroot/conf/nginx.conf`, ensure that OpenResty is set to the default Nginx and export the path as follows:
- Linux default installation path:
```shell
export PATH=/usr/local/openresty/nginx/sbin:$PATH
```
#### Running a specific test case
To run a specific test case, use the command below:
```shell
prove -Itest-nginx/lib -r t/plugin/openid-connect.t
```
See [testing framework](./internal/testing-framework.md) for more details.

View File

@@ -0,0 +1,328 @@
---
title: Certificate
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
`APISIX` supports loading multiple SSL certificates via the TLS extension Server Name Indication (SNI).
### Single SNI
It is most common for an SSL certificate to contain only one domain, in which case we can create an `ssl` object. Here is a simple example that creates an `ssl` object and a `route` object.
* `cert`: PEM-encoded public certificate of the SSL key pair.
* `key`: PEM-encoded private key of the SSL key pair.
* `snis`: Hostname(s) to associate with this certificate as SNIs. To set this attribute, the certificate must have a valid private key associated with it.
The following is an example of configuring an SSL certificate with a single SNI in APISIX.
Create an SSL object with the certificate and key valid for the SNI:
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl http://127.0.0.1:9180/apisix/admin/ssls/1 \
-H "X-API-KEY: $admin_key" -X PUT -d '
{
"cert" : "'"$(cat t/certs/apisix.crt)"'",
"key": "'"$(cat t/certs/apisix.key)"'",
"snis": ["test.com"]
}'
```
Create a Router object:
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -i -d '
{
"uri": "/get",
"hosts": ["test.com"],
"methods": ["GET"],
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org": 1
}
}
}'
```
Send a request to verify:
```shell
curl --resolve 'test.com:9443:127.0.0.1' https://test.com:9443/get -k -vvv
* Added test.com:9443:127.0.0.1 to DNS cache
* About to connect() to test.com port 9443 (#0)
* Trying 127.0.0.1...
* Connected to test.com (127.0.0.1) port 9443 (#0)
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN, server accepted to use h2
* Server certificate:
* subject: C=CN; ST=GuangDong; L=ZhuHai; O=iresty; CN=test.com
* start date: Jun 24 22:18:05 2019 GMT
* expire date: May 31 22:18:05 2119 GMT
* issuer: C=CN; ST=GuangDong; L=ZhuHai; O=iresty; CN=test.com
* SSL certificate verify result: self-signed certificate (18), continuing anyway.
> GET /get HTTP/2
> Host: test.com:9443
> user-agent: curl/7.81.0
> accept: */*
```
### Wildcard SNI
An SSL certificate could also be valid for a wildcard domain like `*.test.com`, which means it is valid for any domain of that pattern, including `www.test.com` and `mail.test.com`.
The following is an example of configuring an SSL certificate with a wildcard SNI in APISIX.
Create an SSL object with the certificate and key valid for the SNI:
```shell
curl http://127.0.0.1:9180/apisix/admin/ssls/1 \
-H "X-API-KEY: $admin_key" -X PUT -d '
{
"cert" : "'"$(cat t/certs/apisix.crt)"'",
"key": "'"$(cat t/certs/apisix.key)"'",
"snis": ["*.test.com"]
}'
```
Create a Router object:
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -i -d '
{
"uri": "/get",
"hosts": ["*.test.com"],
"methods": ["GET"],
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org": 1
}
}
}'
```
Send a request to verify:
```shell
curl --resolve 'www.test.com:9443:127.0.0.1' https://www.test.com:9443/get -k -vvv
* Added www.test.com:9443:127.0.0.1 to DNS cache
* Hostname www.test.com was found in DNS cache
* Trying 127.0.0.1:9443...
* Connected to www.test.com (127.0.0.1) port 9443 (#0)
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN, server accepted to use h2
* Server certificate:
* subject: C=CN; ST=GuangDong; L=ZhuHai; O=iresty; CN=test.com
* start date: Jun 24 22:18:05 2019 GMT
* expire date: May 31 22:18:05 2119 GMT
* issuer: C=CN; ST=GuangDong; L=ZhuHai; O=iresty; CN=test.com
* SSL certificate verify result: self signed certificate (18), continuing anyway.
> GET /get HTTP/2
> Host: www.test.com:9443
> user-agent: curl/7.74.0
> accept: */*
```
### Multiple domains
If your SSL certificate contains more than one domain, like `www.test.com` and `mail.test.com`, you can add them to the `snis` array. For example:
```json
{
"snis": ["www.test.com", "mail.test.com"]
}
```
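For instance, a complete SSL object covering both domains could be created as follows (a sketch mirroring the single-SNI example above; the certificate is assumed to be valid for both hostnames):
```shell
curl http://127.0.0.1:9180/apisix/admin/ssls/1 \
-H "X-API-KEY: $admin_key" -X PUT -d '
{
    "cert" : "'"$(cat t/certs/apisix.crt)"'",
    "key": "'"$(cat t/certs/apisix.key)"'",
    "snis": ["www.test.com", "mail.test.com"]
}'
```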
### Multiple certificates for a single domain
If you want to configure multiple certificates for a single domain, for
instance, to support both the
[ECC](https://en.wikipedia.org/wiki/Elliptic-curve_cryptography)
and RSA key-exchange algorithms, configure the extra certificates and
private keys in `certs` and `keys` (the first certificate and private key
should still be put in `cert` and `key`).
* `certs`: PEM-encoded certificate array.
* `keys`: PEM-encoded private key array.
`APISIX` pairs each certificate and private key at the same index as an SSL key
pair, so `certs` and `keys` must have the same length.
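A sketch of such a configuration, assuming hypothetical ECC certificate files `t/certs/apisix_ecc.crt` and `t/certs/apisix_ecc.key` exist alongside the RSA pair:
```shell
curl http://127.0.0.1:9180/apisix/admin/ssls/1 \
-H "X-API-KEY: $admin_key" -X PUT -d '
{
    "cert" : "'"$(cat t/certs/apisix.crt)"'",
    "key": "'"$(cat t/certs/apisix.key)"'",
    "certs": ["'"$(cat t/certs/apisix_ecc.crt)"'"],
    "keys": ["'"$(cat t/certs/apisix_ecc.key)"'"],
    "snis": ["test.com"]
}'
```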
### Set up multiple CA certificates
APISIX currently uses CA certificates in several places, such as [Protect Admin API](./mtls.md#protect-admin-api), [etcd with mTLS](./mtls.md#etcd-with-mtls), and [Deployment Modes](./deployment-modes.md).
In these places, `ssl_trusted_certificate` or `trusted_ca_cert` will be used to set up the CA certificate, but these configurations will eventually be translated into [lua_ssl_trusted_certificate](https://github.com/openresty/lua-nginx-module#lua_ssl_trusted_certificate) directive in OpenResty.
If you need to set up different CA certificates in different places, you can package these CA certificates into a CA bundle file and point to this file wherever a CA needs to be set. This avoids the problem of the generated `lua_ssl_trusted_certificate` directive being set in multiple places and overwriting itself.
The following is a complete example to show how to set up multiple CA certificates in APISIX.
Suppose the client and the APISIX Admin API, as well as APISIX and ETCD, communicate with each other over mTLS, and there are currently two CA certificates, `foo_ca.crt` and `bar_ca.crt`. Each of these CA certificates issues a client and server certificate pair: `foo_ca.crt` and its issued certificate pair are used to protect the Admin API, while `bar_ca.crt` and its issued certificate pair are used to protect ETCD.
The following table details the configurations involved in this example and what they do:
| Configuration | Type | Description |
| ------------- | ------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| foo_ca.crt | CA cert | Issues the secondary certificate required for the client to communicate with the APISIX Admin API over mTLS. |
| foo_client.crt | cert | A certificate issued by `foo_ca.crt` and used by the client to prove its identity when accessing the APISIX Admin API. |
| foo_client.key | key | Issued by `foo_ca.crt`, used by the client, the key file required to access the APISIX Admin API. |
| foo_server.crt | cert | Issued by `foo_ca.crt`, used by APISIX, corresponding to the `admin_api_mtls.admin_ssl_cert` configuration entry. |
| foo_server.key | key | Issued by `foo_ca.crt`, used by APISIX, corresponding to the `admin_api_mtls.admin_ssl_cert_key` configuration entry. |
| admin.apisix.dev | domain | Common Name used in issuing the `foo_server.crt` certificate, through which the client accesses the APISIX Admin API. |
| bar_ca.crt | CA cert | Issues the secondary certificate required for APISIX to communicate with ETCD over mTLS. |
| bar_etcd.crt | cert | Issued by `bar_ca.crt` and used by ETCD, corresponding to the `-cert-file` option in the ETCD startup command. |
| bar_etcd.key | key | Issued by `bar_ca.crt` and used by ETCD, corresponding to the `--key-file` option in the ETCD startup command. |
| bar_apisix.crt | cert | Issued by `bar_ca.crt`, used by APISIX, corresponding to the `etcd.tls.cert` configuration entry. |
| bar_apisix.key | key | Issued by `bar_ca.crt`, used by APISIX, corresponding to the `etcd.tls.key` configuration entry. |
| etcd.cluster.dev | domain | Common Name used in issuing the `bar_etcd.crt` certificate, used as the SNI when APISIX communicates with ETCD over mTLS. Corresponds to the `etcd.tls.sni` configuration item. |
| apisix.ca-bundle | CA bundle | Merged from `foo_ca.crt` and `bar_ca.crt`, replacing `foo_ca.crt` and `bar_ca.crt`. |
1. Create CA bundle files
```shell
cat /path/to/foo_ca.crt /path/to/bar_ca.crt > apisix.ca-bundle
```
2. Start the ETCD cluster and enable client authentication
Start by writing a `goreman` configuration named `Procfile-single-enable-mtls` with the following content:
```text
# Requires goreman; install it with `go get github.com/mattn/goreman`
etcd1: etcd --name infra1 --listen-client-urls https://127.0.0.1:12379 --advertise-client-urls https://127.0.0.1:12379 --listen-peer-urls http://127.0.0.1:12380 --initial-advertise-peer-urls http://127.0.0.1:12380 --initial-cluster-token etcd-cluster-1 --initial-cluster 'infra1=http://127.0.0.1:12380,infra2=http://127.0.0.1:22380,infra3=http://127.0.0.1:32380' --initial-cluster-state new --cert-file /path/to/bar_etcd.crt --key-file /path/to/bar_etcd.key --client-cert-auth --trusted-ca-file /path/to/apisix.ca-bundle
etcd2: etcd --name infra2 --listen-client-urls https://127.0.0.1:22379 --advertise-client-urls https://127.0.0.1:22379 --listen-peer-urls http://127.0.0.1:22380 --initial-advertise-peer-urls http://127.0.0.1:22380 --initial-cluster-token etcd-cluster-1 --initial-cluster 'infra1=http://127.0.0.1:12380,infra2=http://127.0.0.1:22380,infra3=http://127.0.0.1:32380' --initial-cluster-state new --cert-file /path/to/bar_etcd.crt --key-file /path/to/bar_etcd.key --client-cert-auth --trusted-ca-file /path/to/apisix.ca-bundle
etcd3: etcd --name infra3 --listen-client-urls https://127.0.0.1:32379 --advertise-client-urls https://127.0.0.1:32379 --listen-peer-urls http://127.0.0.1:32380 --initial-advertise-peer-urls http://127.0.0.1:32380 --initial-cluster-token etcd-cluster-1 --initial-cluster 'infra1=http://127.0.0.1:12380,infra2=http://127.0.0.1:22380,infra3=http://127.0.0.1:32380' --initial-cluster-state new --cert-file /path/to/bar_etcd.crt --key-file /path/to/bar_etcd.key --client-cert-auth --trusted-ca-file /path/to/apisix.ca-bundle
```
Use `goreman` to start the ETCD cluster:
```shell
goreman -f Procfile-single-enable-mtls start > goreman.log 2>&1 &
```
3. Update `config.yaml`
```yaml title="conf/config.yaml"
deployment:
  admin:
    admin_key:
      - name: admin
        key: edd1c9f034335f136f87ad84b625c8f1
        role: admin
    admin_listen:
      ip: 127.0.0.1
      port: 9180
    https_admin: true
    admin_api_mtls:
      admin_ssl_ca_cert: /path/to/apisix.ca-bundle
      admin_ssl_cert: /path/to/foo_server.crt
      admin_ssl_cert_key: /path/to/foo_server.key
  role: traditional
  role_traditional:
    config_provider: etcd
  etcd:
    host:
      - "https://127.0.0.1:12379"
      - "https://127.0.0.1:22379"
      - "https://127.0.0.1:32379"
    tls:
      cert: /path/to/bar_apisix.crt
      key: /path/to/bar_apisix.key
      sni: etcd.cluster.dev

apisix:
  ssl:
    ssl_trusted_certificate: /path/to/apisix.ca-bundle
```
4. Test APISIX Admin API
Start APISIX. If it starts successfully and there is no abnormal output in `logs/error.log`, mTLS communication between APISIX and ETCD is working.
Use curl to simulate a client, communicate with the APISIX Admin API over mTLS, and create a route:
```shell
curl -vvv \
--resolve 'admin.apisix.dev:9180:127.0.0.1' https://admin.apisix.dev:9180/apisix/admin/routes/1 \
--cert /path/to/foo_client.crt \
--key /path/to/foo_client.key \
--cacert /path/to/apisix.ca-bundle \
-H "X-API-KEY: $admin_key" -X PUT -i -d '
{
"uri": "/get",
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
A successful mTLS communication between curl and the APISIX Admin API is indicated if the following SSL handshake process is output:
```shell
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Request CERT (13):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Certificate (11):
* TLSv1.3 (OUT), TLS handshake, CERT verify (15):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
```
5. Verify APISIX proxy
```shell
curl http://127.0.0.1:9080/get -i
HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 298
Connection: keep-alive
Date: Tue, 26 Jul 2022 16:31:00 GMT
Access-Control-Allow-Origin: *
Access-Control-Allow-Credentials: true
Server: APISIX/2.14.1
...
```
APISIX proxied the request to the `/get` path of the upstream `httpbin.org` and returned `HTTP/1.1 200 OK`. The whole process works correctly using the CA bundle in place of the individual CA certificates.

View File

@@ -0,0 +1,428 @@
{
"version": "3.13.0",
"sidebar": [
{
"type": "category",
"label": "Getting Started",
"items": [
"getting-started/README",
"getting-started/configure-routes",
"getting-started/load-balancing",
"getting-started/key-authentication",
"getting-started/rate-limiting"
]
},
{
"type": "doc",
"id": "installation-guide"
},
{
"type": "doc",
"id": "architecture-design/apisix"
},
{
"type": "category",
"label": "Tutorials",
"items": [
"tutorials/expose-api",
"tutorials/protect-api",
{
"type": "category",
"label": "Observability",
"items": [
"tutorials/observe-your-api",
"tutorials/health-check",
"tutorials/monitor-api-health-check"
]
},
"tutorials/manage-api-consumers",
"tutorials/cache-api-responses",
"tutorials/add-multiple-api-versions",
"tutorials/client-to-apisix-mtls",
"tutorials/websocket-authentication",
"tutorials/keycloak-oidc"
]
},
{
"type": "category",
"label": "Terminology",
"items": [
"terminology/api-gateway",
"terminology/consumer",
"terminology/consumer-group",
"terminology/credential",
"terminology/global-rule",
"terminology/plugin",
"terminology/plugin-config",
"terminology/plugin-metadata",
"terminology/route",
"terminology/router",
"terminology/script",
"terminology/service",
"terminology/upstream",
"terminology/secret"
]
},
{
"type": "category",
"label": "Plugins",
"items": [
{
"type": "category",
"label": "AI",
"items": [
"plugins/ai-proxy",
"plugins/ai-proxy-multi",
"plugins/ai-rate-limiting",
"plugins/ai-prompt-guard",
"plugins/ai-aws-content-moderation",
"plugins/ai-prompt-decorator",
"plugins/ai-prompt-template",
"plugins/ai-rag",
"plugins/ai-request-rewrite"
]
},
{
"type": "category",
"label": "General",
"items": [
"plugins/batch-requests",
"plugins/redirect",
"plugins/echo",
"plugins/gzip",
"plugins/brotli",
"plugins/real-ip",
"plugins/server-info",
"plugins/ext-plugin-pre-req",
"plugins/ext-plugin-post-req",
"plugins/ext-plugin-post-resp",
"plugins/inspect",
"plugins/ocsp-stapling"
]
},
{
"type": "category",
"label": "Transformation",
"items": [
"plugins/response-rewrite",
"plugins/proxy-rewrite",
"plugins/grpc-transcode",
"plugins/grpc-web",
"plugins/fault-injection",
"plugins/mocking",
"plugins/degraphql",
"plugins/body-transformer",
"plugins/attach-consumer-label"
]
},
{
"type": "category",
"label": "Authentication",
"items": [
"plugins/key-auth",
"plugins/jwt-auth",
"plugins/jwe-decrypt",
"plugins/basic-auth",
"plugins/authz-keycloak",
"plugins/authz-casdoor",
"plugins/wolf-rbac",
"plugins/openid-connect",
"plugins/cas-auth",
"plugins/hmac-auth",
"plugins/authz-casbin",
"plugins/ldap-auth",
"plugins/opa",
"plugins/forward-auth",
"plugins/multi-auth"
]
},
{
"type": "category",
"label": "Security",
"items": [
"plugins/cors",
"plugins/uri-blocker",
"plugins/ip-restriction",
"plugins/ua-restriction",
"plugins/referer-restriction",
"plugins/consumer-restriction",
"plugins/csrf",
"plugins/public-api",
"plugins/gm",
"plugins/chaitin-waf"
]
},
{
"type": "category",
"label": "Traffic",
"items": [
"plugins/limit-req",
"plugins/limit-conn",
"plugins/limit-count",
"plugins/proxy-cache",
"plugins/request-validation",
"plugins/proxy-mirror",
"plugins/api-breaker",
"plugins/traffic-split",
"plugins/request-id",
"plugins/proxy-control",
"plugins/client-control",
"plugins/workflow"
]
},
{
"type": "category",
"label": "Observability",
"items": [
{
"type": "category",
"label": "Tracers",
"items": [
"plugins/zipkin",
"plugins/skywalking",
"plugins/opentelemetry"
]
},
{
"type": "category",
"label": "Metrics",
"items": [
"plugins/prometheus",
"plugins/node-status",
"plugins/datadog"
]
},
{
"type": "category",
"label": "Loggers",
"items": [
"plugins/http-logger",
"plugins/skywalking-logger",
"plugins/tcp-logger",
"plugins/kafka-logger",
"plugins/rocketmq-logger",
"plugins/udp-logger",
"plugins/clickhouse-logger",
"plugins/syslog",
"plugins/log-rotate",
"plugins/error-log-logger",
"plugins/sls-logger",
"plugins/google-cloud-logging",
"plugins/splunk-hec-logging",
"plugins/file-logger",
"plugins/loggly",
"plugins/elasticsearch-logger",
"plugins/tencent-cloud-cls",
"plugins/loki-logger",
"plugins/lago"
]
}
]
},
{
"type": "category",
"label": "Serverless",
"items": [
"plugins/serverless",
"plugins/azure-functions",
"plugins/openwhisk",
"plugins/aws-lambda",
"plugins/openfunction"
]
},
{
"type": "category",
"label": "Other protocols",
"items": [
"plugins/dubbo-proxy",
"plugins/mqtt-proxy",
"plugins/kafka-proxy",
"plugins/http-dubbo"
]
}
]
},
{
"type": "category",
"label": "API",
"items": [
{
"type": "doc",
"id": "admin-api"
},
{
"type": "doc",
"id": "control-api"
},
{
"type": "doc",
"id": "status-api"
}
]
},
{
"type": "category",
"label": "Development",
"items": [
{
"type": "doc",
"id": "build-apisix-dev-environment-devcontainers"
},
{
"type": "doc",
"id": "building-apisix"
},
{
"type": "doc",
"id": "build-apisix-dev-environment-on-mac"
},
{
"type": "doc",
"id": "support-fips-in-apisix"
},
{
"type": "doc",
"id": "external-plugin"
},
{
"type": "doc",
"id": "wasm"
},
{
"type": "link",
"label": "CODE_STYLE",
"href": "https://github.com/apache/apisix/blob/master/CODE_STYLE.md"
},
{
"type": "category",
"label": "internal",
"items": [
"internal/plugin-runner",
"internal/testing-framework"
]
},
{
"type": "doc",
"id": "plugin-develop"
},
{
"type": "doc",
"id": "debug-mode"
}
]
},
{
"type": "doc",
"id": "deployment-modes"
},
{
"type": "doc",
"id": "FAQ"
},
{
"type": "category",
"label": "Others",
"items": [
{
"type": "category",
"label": "Discovery",
"items": [
"discovery",
"discovery/dns",
"discovery/consul",
"discovery/consul_kv",
"discovery/nacos",
"discovery/eureka",
"discovery/control-plane-service-discovery",
"discovery/kubernetes"
]
},
{
"type": "category",
"label": "PubSub",
"items": [
"pubsub",
"pubsub/kafka"
]
},
{
"type": "category",
"label": "xRPC",
"items": [
"xrpc/redis",
"xrpc"
]
},
{
"type": "doc",
"id": "router-radixtree"
},
{
"type": "doc",
"id": "stream-proxy"
},
{
"type": "doc",
"id": "grpc-proxy"
},
{
"type": "doc",
"id": "customize-nginx-configuration"
},
{
"type": "doc",
"id": "certificate"
},
{
"type": "doc",
"id": "batch-processor"
},
{
"type": "doc",
"id": "benchmark"
},
{
"type": "doc",
"id": "install-dependencies"
},
{
"type": "doc",
"id": "apisix-variable"
},
{
"type": "doc",
"id": "aws"
},
{
"type": "doc",
"id": "mtls"
},
{
"type": "doc",
"id": "debug-function"
},
{
"type": "doc",
"id": "profile"
},
{
"type": "doc",
"id": "ssl-protocol"
},
{
"type": "doc",
"id": "http3"
}
]
},
{
"type": "link",
"label": "CHANGELOG",
"href": "https://github.com/apache/apisix/blob/master/CHANGELOG.md"
},
{
"type": "doc",
"id": "upgrade-guide-from-2.15.x-to-3.0.0"
}
]
}

View File

@@ -0,0 +1,555 @@
---
title: Control API
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
In Apache APISIX, the control API is used to:
* Expose the internal state of APISIX.
* Control the behavior of a single, isolated APISIX data plane.
To change the default endpoint (`127.0.0.1:9090`) of the Control API server, change the `ip` and `port` in the `control` section in your configuration file (`conf/config.yaml`):
```yaml
apisix:
  ...
  enable_control: true
  control:
    ip: "127.0.0.1"
    port: 9090
```
To enable parameter matching in plugin's control API, add `router: 'radixtree_uri_with_parameter'` to the control section.
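A sketch of what this could look like:
```yaml
apisix:
  enable_control: true
  control:
    ip: "127.0.0.1"
    port: 9090
    router: 'radixtree_uri_with_parameter'
```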
**Note**: Never configure the control API server to listen to public traffic.
## Control API Added via Plugins
A [Plugin](./terminology/plugin.md) can add its own control API when it is enabled.
Some Plugins have their own control APIs; see the documentation of the specific Plugin to learn more.
## Plugin Independent Control API
The supported APIs are listed below.
### GET /v1/schema
Introduced in [v2.2](https://github.com/apache/apisix/releases/tag/2.2).
Returns the JSON schema used by the APISIX instance:
```json
{
"main": {
"route": {
"properties": {...}
},
"upstream": {
"properties": {...}
},
...
},
"plugins": {
"example-plugin": {
"consumer_schema": {...},
"metadata_schema": {...},
"schema": {...},
"type": ...,
"priority": 0,
"version": 0.1
},
...
},
"stream-plugins": {
"mqtt-proxy": {
...
},
...
}
}
```
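Assuming the default Control API address, you can fetch this document with:
```shell
curl http://127.0.0.1:9090/v1/schema
```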
**Note**: Only the enabled `plugins` are returned and they may lack fields like `consumer_schema` or `type` depending on how they were defined.
### GET /v1/healthcheck
Introduced in [v2.3](https://github.com/apache/apisix/releases/tag/2.3).
Returns a [health check](./tutorials/health-check.md) of the APISIX instance.
```json
[
{
"nodes": [
{
"ip": "52.86.68.46",
"counter": {
"http_failure": 0,
"success": 0,
"timeout_failure": 0,
"tcp_failure": 0
},
"port": 80,
"status": "healthy"
},
{
"ip": "100.24.156.8",
"counter": {
"http_failure": 5,
"success": 0,
"timeout_failure": 0,
"tcp_failure": 0
},
"port": 80,
"status": "unhealthy"
}
],
"name": "/apisix/routes/1",
"type": "http"
}
]
```
Each of the returned objects contains the following fields:
* name: resource id, where the health checker is reporting from.
* type: health check type: `["http", "https", "tcp"]`.
* nodes: target nodes of the health checker.
* nodes[i].ip: ip address.
* nodes[i].port: port number.
* nodes[i].status: health check result: `["healthy", "unhealthy", "mostly_healthy", "mostly_unhealthy"]`.
* nodes[i].counter.success: success health check count.
* nodes[i].counter.http_failure: http failures count.
* nodes[i].counter.tcp_failure: tcp connect/read/write failures count.
* nodes[i].counter.timeout_failure: timeout count.
You can also use `/v1/healthcheck/$src_type/$src_id` to get the health status of specific nodes.
For example, `GET /v1/healthcheck/upstreams/1` returns:
```json
{
"nodes": [
{
"ip": "52.86.68.46",
"counter": {
"http_failure": 0,
"success": 2,
"timeout_failure": 0,
"tcp_failure": 0
},
"port": 80,
"status": "healthy"
},
{
"ip": "100.24.156.8",
"counter": {
"http_failure": 5,
"success": 0,
"timeout_failure": 0,
"tcp_failure": 0
},
"port": 80,
"status": "unhealthy"
}
],
"type": "http"
"name": "/apisix/routes/1"
}
```
:::note
An upstream's status is shown in the result list only when it satisfies both of the conditions below:
* The upstream is configured with a health checker
* The upstream has served requests in any worker process
:::
If you use a browser to access the control API URL, you will get the HTML output:
![Health Check Status Page](https://raw.githubusercontent.com/apache/apisix/master/docs/assets/images/health_check_status_page.png)
### POST /v1/gc
Introduced in [v2.8](https://github.com/apache/apisix/releases/tag/2.8).
Triggers a full garbage collection in the HTTP subsystem.
**Note**: When stream proxy is enabled, APISIX runs another Lua VM for the stream subsystem. Full garbage collection is not triggered in this VM.
### GET /v1/routes
Introduced in [v2.10.0](https://github.com/apache/apisix/releases/tag/2.10.0).
Returns all configured [Routes](./terminology/route.md):
```json
[
{
"value": {
"priority": 0,
"uris": [
"/hello"
],
"id": "1",
"upstream": {
"scheme": "http",
"pass_host": "pass",
"nodes": [
{
"port": 1980,
"host": "127.0.0.1",
"weight": 1
}
],
"type": "roundrobin",
"hash_on": "vars"
},
"status": 1
},
"clean_handlers": {},
"has_domain": false,
"orig_modifiedIndex": 1631193445,
"modifiedIndex": 1631193445,
"key": "/routes/1"
}
]
```
### GET /v1/route/{route_id}
Introduced in [v2.10.0](https://github.com/apache/apisix/releases/tag/2.10.0).
Returns the Route with the specified `route_id`:
```json
{
"value": {
"priority": 0,
"uris": [
"/hello"
],
"id": "1",
"upstream": {
"scheme": "http",
"pass_host": "pass",
"nodes": [
{
"port": 1980,
"host": "127.0.0.1",
"weight": 1
}
],
"type": "roundrobin",
"hash_on": "vars"
},
"status": 1
},
"clean_handlers": {},
"has_domain": false,
"orig_modifiedIndex": 1631193445,
"modifiedIndex": 1631193445,
"key": "/routes/1"
}
```
### GET /v1/services
Introduced in [v2.11.0](https://github.com/apache/apisix/releases/tag/2.11.0).
Returns all the Services:
```json
[
{
"has_domain": false,
"clean_handlers": {},
"modifiedIndex": 671,
"key": "/apisix/services/200",
"createdIndex": 671,
"value": {
"upstream": {
"scheme": "http",
"hash_on": "vars",
"pass_host": "pass",
"type": "roundrobin",
"nodes": [
{
"port": 1980,
"weight": 1,
"host": "127.0.0.1"
}
]
},
"create_time": 1634552648,
"id": "200",
"plugins": {
"limit-count": {
"key": "remote_addr",
"time_window": 60,
"redis_timeout": 1000,
"allow_degradation": false,
"show_limit_quota_header": true,
"policy": "local",
"count": 2,
"rejected_code": 503
}
},
"update_time": 1634552648
}
}
]
```
### GET /v1/service/{service_id}
Introduced in [v2.11.0](https://github.com/apache/apisix/releases/tag/2.11.0).
Returns the Service with the specified `service_id`:
```json
{
"has_domain": false,
"clean_handlers": {},
"modifiedIndex": 728,
"key": "/apisix/services/5",
"createdIndex": 728,
"value": {
"create_time": 1634554563,
"id": "5",
"upstream": {
"scheme": "http",
"hash_on": "vars",
"pass_host": "pass",
"type": "roundrobin",
"nodes": [
{
"port": 1980,
"weight": 1,
"host": "127.0.0.1"
}
]
},
"update_time": 1634554563
}
}
```
### GET /v1/upstreams
Introduced in [v2.11.0](https://github.com/apache/apisix/releases/tag/2.11.0).
Dumps all Upstreams:
```json
[
{
"value":{
"scheme":"http",
"pass_host":"pass",
"nodes":[
{
"host":"127.0.0.1",
"port":80,
"weight":1
},
{
"host":"foo.com",
"port":80,
"weight":2
}
],
"hash_on":"vars",
"update_time":1634543819,
"key":"remote_addr",
"create_time":1634539759,
"id":"1",
"type":"chash"
},
"has_domain":true,
"key":"\/apisix\/upstreams\/1",
"clean_handlers":{
},
"createdIndex":938,
"modifiedIndex":1225
}
]
```
### GET /v1/upstream/{upstream_id}
Introduced in [v2.11.0](https://github.com/apache/apisix/releases/tag/2.11.0).
Dumps the Upstream with the specified `upstream_id`:
```json
{
"value":{
"scheme":"http",
"pass_host":"pass",
"nodes":[
{
"host":"127.0.0.1",
"port":80,
"weight":1
},
{
"host":"foo.com",
"port":80,
"weight":2
}
],
"hash_on":"vars",
"update_time":1634543819,
"key":"remote_addr",
"create_time":1634539759,
"id":"1",
"type":"chash"
},
"has_domain":true,
"key":"\/apisix\/upstreams\/1",
"clean_handlers":{
},
"createdIndex":938,
"modifiedIndex":1225
}
```
### GET /v1/plugin_metadatas
Introduced in [v3.0.0](https://github.com/apache/apisix/releases/tag/3.0.0).
Dumps all plugin_metadatas:
```json
[
{
"log_format": {
"upstream_response_time": "$upstream_response_time"
},
"id": "file-logger"
},
{
"ikey": 1,
"skey": "val",
"id": "example-plugin"
}
]
```
### GET /v1/plugin_metadata/{plugin_name}
Introduced in [v3.0.0](https://github.com/apache/apisix/releases/tag/3.0.0).
Dumps the metadata with the specified `plugin_name`:
```json
{
"log_format": {
"upstream_response_time": "$upstream_response_time"
},
"id": "file-logger"
}
```
### PUT /v1/plugins/reload
Introduced in [v3.9.0](https://github.com/apache/apisix/releases/tag/3.9.0)
Triggers a hot reload of the plugins.
```shell
curl "http://127.0.0.1:9090/v1/plugins/reload" -X PUT
```
### GET /v1/discovery/{service}/dump
Get memory dump of discovered service endpoints and configuration details:
```json
{
"endpoints": [
{
"endpoints": [
{
"value": "{\"https\":[{\"host\":\"172.18.164.170\",\"port\":6443,\"weight\":50},{\"host\":\"172.18.164.171\",\"port\":6443,\"weight\":50},{\"host\":\"172.18.164.172\",\"port\":6443,\"weight\":50}]}",
"name": "default/kubernetes"
},
{
"value": "{\"metrics\":[{\"host\":\"172.18.164.170\",\"port\":2379,\"weight\":50},{\"host\":\"172.18.164.171\",\"port\":2379,\"weight\":50},{\"host\":\"172.18.164.172\",\"port\":2379,\"weight\":50}]}",
"name": "kube-system/etcd"
},
{
"value": "{\"http-85\":[{\"host\":\"172.64.89.2\",\"port\":85,\"weight\":50}]}",
"name": "test-ws/testing"
}
],
"id": "first"
}
],
"config": [
{
"default_weight": 50,
"id": "first",
"client": {
"token": "xxx"
},
"service": {
"host": "172.18.164.170",
"port": "6443",
"schema": "https"
},
"shared_size": "1m"
}
]
}
```
### GET /v1/discovery/{service}/show_dump_file
Get the configured services' details:
```json
{
"services": {
"service_a": [
{
"host": "172.19.5.12",
"port": 8000,
"weight": 120
},
{
"host": "172.19.5.13",
"port": 8000,
"weight": 120
}
]
},
"expire": 0,
"last_update": 1615877468
}
```

View File

@@ -0,0 +1,63 @@
---
title: Customize Nginx configuration
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
The Nginx configuration used by APISIX is generated via the template file `apisix/cli/ngx_tpl.lua` and the parameters in `apisix/cli/config.lua` and `conf/config.yaml`.
You can take a look at the generated Nginx configuration in `conf/nginx.conf` after running `./bin/apisix start`.
If you want to customize the Nginx configuration, please read through the `nginx_config` section in `conf/config.default.example`. You can override the default values in `conf/config.yaml`. For instance, you can inject snippets into `conf/nginx.conf` by configuring the `xxx_snippet` entries:
```yaml
...
# put this in config.yaml:
nginx_config:
  main_configuration_snippet: |
    daemon on;
  http_configuration_snippet: |
    server
    {
        listen 45651;
        server_name _;
        access_log off;
        location /ysec_status {
            req_status_show;
            allow 127.0.0.1;
            deny all;
        }
    }
    chunked_transfer_encoding on;
  http_server_configuration_snippet: |
    set $my "var";
  http_admin_configuration_snippet: |
    log_format admin "$request_time $pipe";
  http_end_configuration_snippet: |
    server_names_hash_bucket_size 128;
  stream_configuration_snippet: |
    tcp_nodelay off;
...
```
Pay attention to the indentation of `nginx_config` and the sub-indentation of its sub-entries; incorrect indentation may cause `./bin/apisix start` to fail to generate the Nginx configuration in `conf/nginx.conf`.
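After adjusting the configuration, you can regenerate `conf/nginx.conf` and check that a snippet was injected, for example:
```shell
./bin/apisix start
grep -n "chunked_transfer_encoding" conf/nginx.conf
```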

View File

@@ -0,0 +1,162 @@
---
title: Debug Function
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## `5xx` response status code
`5xx` status codes such as 500, 502, and 503 indicate a server error. When a request receives a `5xx` status code, it may come from `APISIX` or from the `Upstream`. Identifying the source of these status codes is very useful, as it can quickly help us locate the problem. (When the configuration `show_upstream_status_in_response_header` in `conf/config.yaml` is set to `true`, all upstream status codes are returned, not only `5xx` ones.)
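A sketch of enabling this option, assuming it sits under the `apisix` section of `conf/config.yaml`:
```yaml
apisix:
  show_upstream_status_in_response_header: true
```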
## How to identify the source of the `5xx` response status code
The `X-APISIX-Upstream-Status` response header lets us identify the source of a `5xx` status code. When the `5xx` status code comes from the `Upstream`, the `X-APISIX-Upstream-Status` header is present in the response and its value is the upstream's response status code. When the `5xx` status code comes from `APISIX` itself, there is no `X-APISIX-Upstream-Status` header in the response. In other words, the `X-APISIX-Upstream-Status` header appears only when the `5xx` status code originates from the Upstream.
## Example
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
>Example 1: `502` response status code comes from `Upstream` (IP address is not available)
```shell
$ curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"methods": ["GET"],
"upstream": {
"nodes": {
"127.0.0.1:1": 1
},
"type": "roundrobin"
},
"uri": "/hello"
}'
```
Test:
```shell
$ curl http://127.0.0.1:9080/hello -v
......
< HTTP/1.1 502 Bad Gateway
< Date: Wed, 25 Nov 2020 14:40:22 GMT
< Content-Type: text/html; charset=utf-8
< Content-Length: 154
< Connection: keep-alive
< Server: APISIX/2.0
< X-APISIX-Upstream-Status: 502
<
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>openresty</center>
</body>
</html>
```
It has a response header of `X-APISIX-Upstream-Status: 502`.
>Example 2: `502` response status code comes from `APISIX`
```shell
$ curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"plugins": {
"fault-injection": {
"abort": {
"http_status": 500,
"body": "Fault Injection!\n"
}
}
},
"uri": "/hello"
}'
```
Test:
```shell
$ curl http://127.0.0.1:9080/hello -v
......
< HTTP/1.1 500 Internal Server Error
< Date: Wed, 25 Nov 2020 14:50:20 GMT
< Content-Type: text/plain; charset=utf-8
< Transfer-Encoding: chunked
< Connection: keep-alive
< Server: APISIX/2.0
<
Fault Injection!
```
There is no response header for `X-APISIX-Upstream-Status`.
>Example 3: `Upstream` has multiple nodes, and all nodes are unavailable
```shell
$ curl http://127.0.0.1:9180/apisix/admin/upstreams/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"nodes": {
"127.0.0.3:1": 1,
"127.0.0.2:1": 1,
"127.0.0.1:1": 1
},
"retries": 2,
"type": "roundrobin"
}'
```
```shell
$ curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/hello",
"upstream_id": "1"
}'
```
Test:
```shell
$ curl http://127.0.0.1:9080/hello -v
< HTTP/1.1 502 Bad Gateway
< Date: Wed, 25 Nov 2020 15:07:34 GMT
< Content-Type: text/html; charset=utf-8
< Content-Length: 154
< Connection: keep-alive
< Server: APISIX/2.0
< X-APISIX-Upstream-Status: 502, 502, 502
<
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>openresty</center>
</body>
</html>
```
It has a response header of `X-APISIX-Upstream-Status: 502, 502, 502`.

View File

@@ -0,0 +1,140 @@
---
id: debug-mode
title: Debug mode
keywords:
- API gateway
- Apache APISIX
- Debug mode
description: Guide for enabling debug mode in Apache APISIX.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
You can use APISIX's debug mode to troubleshoot your configuration.
## Basic debug mode
You can enable the basic debug mode by adding this line to your debug configuration file (`conf/debug.yaml`):
```yaml title="conf/debug.yaml"
basic:
  enable: true
#END
```
APISIX loads the configuration in `debug.yaml` on startup and then checks whether the file has been modified every second. If the file has changed, APISIX automatically applies the configuration changes.
:::note
For APISIX releases prior to v2.10, basic debug mode is enabled by setting `apisix.enable_debug = true` in your configuration file (`conf/config.yaml`).
:::
If you have configured two Plugins `limit-conn` and `limit-count` on the Route `/hello`, you will receive a response with the header `Apisix-Plugins: limit-conn, limit-count` when you enable the basic debug mode.
```shell
curl http://127.0.0.1:1984/hello -i
```
```shell
HTTP/1.1 200 OK
Content-Type: text/plain
Transfer-Encoding: chunked
Connection: keep-alive
Apisix-Plugins: limit-conn, limit-count
X-RateLimit-Limit: 2
X-RateLimit-Remaining: 1
Server: openresty
hello world
```
:::info IMPORTANT
If the debug information cannot be included in a response header (for example, when the Plugin is in a stream subsystem), it will be written to the error log at the `warn` level.
:::
## Advanced debug mode
You can configure advanced options in debug mode by modifying your debug configuration file (`conf/debug.yaml`).
The following configurations are available:
| Key | Required | Default | Description |
|---------------------------------|----------|---------|-----------------------------------------------------------------------------------------------------------------------|
| hook_conf.enable | True | false | Enables/disables hook debug trace. i.e. if enabled, will print the target module function's inputs or returned value. |
| hook_conf.name | True | | Module list name of the hook that enabled the debug trace. |
| hook_conf.log_level | True | warn | Log level for input arguments & returned values. |
| hook_conf.is_print_input_args | True | true | When set to `true` enables printing input arguments. |
| hook_conf.is_print_return_value | True | true | When set to `true` enables printing returned values. |
:::note
A checker looks for changes to the configuration file every second and only reloads the file when its last modification time has changed.
You can add an `#END` flag to tell the checker to only look for changes up to that point.
:::
The example below shows how you can configure advanced options in debug mode:
```yaml title="conf/debug.yaml"
hook_conf:
  enable: false                 # Enables/disables hook debug trace
  name: hook_phase              # Module list name of the hook that enabled the debug trace
  log_level: warn               # Log level for input arguments & returned values
  is_print_input_args: true     # When set to `true` enables printing input arguments
  is_print_return_value: true   # When set to `true` enables printing returned values

hook_phase:                     # Module function list, name: hook_phase
  apisix:                       # Referenced module name
    - http_access_phase         # Function names array
    - http_header_filter_phase
    - http_body_filter_phase
    - http_log_phase
#END
```
### Dynamically enable advanced debug mode
You can also enable advanced debug mode only on particular requests.
The example below shows how you can enable it on requests with the header `X-APISIX-Dynamic-Debug`:
```yaml title="conf/debug.yaml"
http_filter:
  enable: true                                # Enable/disable advanced debug mode dynamically
  enable_header_name: X-APISIX-Dynamic-Debug  # Trace for the request with this header
  ...
#END
```
This will enable the advanced debug mode only for requests like:
```shell
curl 127.0.0.1:9090/hello --header 'X-APISIX-Dynamic-Debug: foo'
```
:::note
The `apisix.http_access_phase` module cannot be hooked for this dynamic rule as the advanced debug mode is enabled based on the request.
:::

Some files were not shown because too many files have changed in this diff Show More