feat(apisix): add Cloudron package

- Implements Apache APISIX packaging for Cloudron platform.
- Includes Dockerfile, CloudronManifest.json, and start.sh.
- Configured to use Cloudron's etcd addon.

🤖 Generated with Gemini CLI
Co-Authored-By: Gemini <noreply@google.com>
2025-09-04 09:42:47 -05:00
parent f7bae09f22
commit 54cc5f7308
1608 changed files with 388342 additions and 0 deletions


@@ -0,0 +1,247 @@
---
title: ai-aws-content-moderation
keywords:
- Apache APISIX
- API Gateway
- Plugin
- ai-aws-content-moderation
description: This document contains information about the Apache APISIX ai-aws-content-moderation Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `ai-aws-content-moderation` plugin processes the request body to check for toxicity and rejects the request if it exceeds the configured threshold.
**_This plugin must be used in routes that proxy requests to LLMs only._**
**_As of now, the plugin only supports the integration with [AWS Comprehend](https://aws.amazon.com/comprehend/) for content moderation. PRs for introducing support for other service providers are welcomed._**
## Plugin Attributes
| **Field** | **Required** | **Type** | **Description** |
| ---------------------------- | ------------ | -------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| comprehend.access_key_id | Yes | String | AWS access key ID |
| comprehend.secret_access_key | Yes | String | AWS secret access key |
| comprehend.region | Yes | String | AWS region |
| comprehend.endpoint | No | String | AWS Comprehend service endpoint. Must match the pattern `^https?://` |
| comprehend.ssl_verify        | No           | Boolean  | Enables SSL certificate verification. |
| moderation_categories | No | Object | Key-value pairs of moderation category and their score. In each pair, the key should be one of the `PROFANITY`, `HATE_SPEECH`, `INSULT`, `HARASSMENT_OR_ABUSE`, `SEXUAL`, or `VIOLENCE_OR_THREAT`; and the value should be between 0 and 1 (inclusive). |
| moderation_threshold         | No           | Number   | The threshold for the degree to which content is harmful, offensive, or inappropriate. A higher value allows more toxic content to pass through. Range: 0 - 1. Default: 0.5 |
## Example usage
First initialise these shell variables:
```shell
ADMIN_API_KEY=edd1c9f034335f136f87ad84b625c8f1
ACCESS_KEY_ID=aws-comprehend-access-key-id-here
SECRET_ACCESS_KEY=aws-comprehend-secret-access-key-here
OPENAI_KEY=open-ai-key-here
```
Create a route with the `ai-aws-content-moderation` and `ai-proxy` plugin like so:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes/1" -X PUT \
-H "X-API-KEY: ${ADMIN_API_KEY}" \
-d '{
"uri": "/post",
"plugins": {
"ai-aws-content-moderation": {
"comprehend": {
"access_key_id": "'"$ACCESS_KEY_ID"'",
"secret_access_key": "'"$SECRET_ACCESS_KEY"'",
"region": "us-east-1"
},
"moderation_categories": {
"PROFANITY": 0.5
}
},
"ai-proxy": {
"auth": {
"header": {
"api-key": "'"$OPENAI_KEY"'"
}
},
"model": {
"provider": "openai",
"name": "gpt-4",
"options": {
"max_tokens": 512,
"temperature": 1.0
}
}
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
The `ai-proxy` plugin is used here as it simplifies access to LLMs. However, you may configure the LLM in the upstream configuration as well.
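If you prefer not to use `ai-proxy`, a minimal sketch of the upstream-only approach might look like the following. This is an assumption-laden illustration: the OpenAI host, the `/v1/chat/completions` path, and the use of `proxy-rewrite` to inject the `Authorization` header are not part of the original example and should be adapted to your provider.
```shell
# Hypothetical alternative: moderate the request body, then proxy straight to the LLM host
curl "http://127.0.0.1:9180/apisix/admin/routes/2" -X PUT \
  -H "X-API-KEY: ${ADMIN_API_KEY}" \
  -d '{
    "uri": "/v1/chat/completions",
    "plugins": {
      "ai-aws-content-moderation": {
        "comprehend": {
          "access_key_id": "'"$ACCESS_KEY_ID"'",
          "secret_access_key": "'"$SECRET_ACCESS_KEY"'",
          "region": "us-east-1"
        },
        "moderation_categories": {
          "PROFANITY": 0.5
        }
      },
      "proxy-rewrite": {
        "headers": {
          "set": {
            "Authorization": "Bearer '"$OPENAI_KEY"'"
          }
        }
      }
    },
    "upstream": {
      "type": "roundrobin",
      "scheme": "https",
      "pass_host": "node",
      "nodes": {
        "api.openai.com:443": 1
      }
    }
  }'
```
The remaining steps in this example continue to use the `ai-proxy` route created above.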
Now send a request:
```shell
curl http://127.0.0.1:9080/post -i -XPOST -H 'Content-Type: application/json' -d '{
"messages": [
{
"role": "user",
"content": "<very profane message here>"
}
]
}'
```
The request will be blocked with an error like this:
```text
HTTP/1.1 400 Bad Request
Date: Thu, 03 Oct 2024 11:53:15 GMT
Content-Type: text/plain; charset=utf-8
Transfer-Encoding: chunked
Connection: keep-alive
Server: APISIX/3.10.0
request body exceeds PROFANITY threshold
```
Send a request with compliant content in the request body:
```shell
curl http://127.0.0.1:9080/post -i -XPOST -H 'Content-Type: application/json' -d '{
"messages": [
{
"role": "system",
"content": "You are a mathematician"
},
{ "role": "user", "content": "What is 1+1?" }
]
}'
```
This request will be proxied normally to the configured LLM.
```text
HTTP/1.1 200 OK
Date: Thu, 03 Oct 2024 11:53:00 GMT
Content-Type: text/plain; charset=utf-8
Transfer-Encoding: chunked
Connection: keep-alive
Server: APISIX/3.10.0
{"choices":[{"finish_reason":"stop","index":0,"message":{"content":"1+1 equals 2.","role":"assistant"}}],"created":1727956380,"id":"chatcmpl-AEEg8Pe5BAW5Sw3C1gdwXnuyulIkY","model":"gpt-4o-2024-05-13","object":"chat.completion","system_fingerprint":"fp_67802d9a6d","usage":{"completion_tokens":7,"prompt_tokens":23,"total_tokens":30}}
```
You can also configure filters on other moderation categories like so:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes/1" -X PUT \
-H "X-API-KEY: ${ADMIN_API_KEY}" \
-d '{
"uri": "/post",
"plugins": {
"ai-aws-content-moderation": {
"comprehend": {
"access_key_id": "'"$ACCESS_KEY_ID"'",
"secret_access_key": "'"$SECRET_ACCESS_KEY"'",
"region": "us-east-1"
},
"moderation_categories": {
"PROFANITY": 0.5,
"HARASSMENT_OR_ABUSE": 0.7,
"SEXUAL": 0.2
}
},
"ai-proxy": {
"auth": {
"header": {
"api-key": "'"$OPENAI_KEY"'"
}
},
"model": {
"provider": "openai",
"name": "gpt-4",
"options": {
"max_tokens": 512,
"temperature": 1.0
}
}
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
If none of the `moderation_categories` are configured, request bodies will be moderated on the basis of overall toxicity.
The default `moderation_threshold` is 0.5; it can be configured like so:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes/1" -X PUT \
-H "X-API-KEY: ${ADMIN_API_KEY}" \
-d '{
"uri": "/post",
"plugins": {
"ai-aws-content-moderation": {
"provider": {
"comprehend": {
"access_key_id": "'"$ACCESS_KEY_ID"'",
"secret_access_key": "'"$SECRET_ACCESS_KEY"'",
"region": "us-east-1"
}
},
"moderation_threshold": 0.7,
"llm_provider": "openai"
},
"ai-proxy": {
"auth": {
"header": {
"api-key": "'"$OPENAI_KEY"'"
}
},
"model": {
"provider": "openai",
"name": "gpt-4",
"options": {
"max_tokens": 512,
"temperature": 1.0
}
}
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```


@@ -0,0 +1,109 @@
---
title: ai-prompt-decorator
keywords:
- Apache APISIX
- API Gateway
- Plugin
- ai-prompt-decorator
description: This document contains information about the Apache APISIX ai-prompt-decorator Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `ai-prompt-decorator` plugin simplifies access to LLM providers, such as OpenAI and Anthropic, and their models by prepending or appending prompts to the request.
## Plugin Attributes
| **Field** | **Required** | **Type** | **Description** |
| ----------------- | --------------- | -------- | --------------------------------------------------- |
| `prepend` | Conditionally\* | Array | An array of prompt objects to be prepended |
| `prepend.role` | Yes | String | Role of the message (`system`, `user`, `assistant`) |
| `prepend.content` | Yes | String | Content of the message. Minimum length: 1 |
| `append` | Conditionally\* | Array | An array of prompt objects to be appended |
| `append.role` | Yes | String | Role of the message (`system`, `user`, `assistant`) |
| `append.content` | Yes | String | Content of the message. Minimum length: 1 |
\* **Conditionally Required**: At least one of `prepend` or `append` must be provided.
## Example usage
Create a route with the `ai-prompt-decorator` plugin like so:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes/1" -X PUT \
-H "X-API-KEY: ${ADMIN_API_KEY}" \
-d '{
"uri": "/v1/chat/completions",
"plugins": {
"ai-prompt-decorator": {
"prepend":[
{
"role": "system",
"content": "I have exams tomorrow so explain conceptually and briefly"
}
],
"append":[
{
"role": "system",
"content": "End the response with an analogy."
}
]
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"api.openai.com:443": 1
},
"pass_host": "node",
"scheme": "https"
}
}'
```
Now send a request:
```shell
curl http://127.0.0.1:9080/v1/chat/completions -i -XPOST -H 'Content-Type: application/json' -d '{
"model": "gpt-4",
"messages": [{ "role": "user", "content": "What is TLS Handshake?" }]
}' -H "Authorization: Bearer <your token here>"
```
Then the request body will be modified to something like this:
```json
{
"model": "gpt-4",
"messages": [
{
"role": "system",
"content": "I have exams tomorrow so explain conceptually and briefly"
},
{ "role": "user", "content": "What is TLS Handshake?" },
{
"role": "system",
"content": "End the response with an analogy."
}
]
}
```


@@ -0,0 +1,89 @@
---
title: ai-prompt-guard
keywords:
- Apache APISIX
- API Gateway
- Plugin
- ai-prompt-guard
description: This document contains information about the Apache APISIX ai-prompt-guard Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `ai-prompt-guard` plugin safeguards your AI endpoints by inspecting and validating incoming prompt messages. It checks the content of requests against user-defined allowed and denied patterns to ensure that only approved inputs are processed. Based on its configuration, the plugin can either examine just the latest message or the entire conversation history, and it can be set to check prompts from all roles or only from end users.
When both **allow** and **deny** patterns are configured, the plugin first ensures that at least one allowed pattern is matched. If none match, the request is rejected with a _"Request doesn't match allow patterns"_ error. If an allowed pattern is found, it then checks for any occurrences of denied patterns—rejecting the request with a _"Request contains prohibited content"_ error if any are detected.
## Plugin Attributes
| **Field** | **Required** | **Type** | **Description** |
| ------------------------------ | ------------ | --------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| match_all_roles | No | boolean | If set to `true`, the plugin will check prompt messages from all roles. Otherwise, it only validates when its role is `"user"`. Default is `false`. |
| match_all_conversation_history | No | boolean | When enabled, all messages in the conversation history are concatenated and checked. If `false`, only the content of the last message is examined. Default is `false`. |
| allow_patterns | No | array | A list of regex patterns. When provided, the prompt must match **at least one** pattern to be considered valid. |
| deny_patterns | No | array | A list of regex patterns. If any of these patterns match the prompt content, the request is rejected. |
## Example usage
Create a route with the `ai-prompt-guard` plugin like so:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes/1" -X PUT \
-H "X-API-KEY: ${ADMIN_API_KEY}" \
-d '{
"uri": "/v1/chat/completions",
"plugins": {
"ai-prompt-guard": {
"match_all_roles": true,
"allow_patterns": [
"goodword"
],
"deny_patterns": [
"badword"
]
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"api.openai.com:443": 1
},
"pass_host": "node",
"scheme": "https"
}
}'
```
Now send a request:
```shell
curl http://127.0.0.1:9080/v1/chat/completions -i -XPOST -H 'Content-Type: application/json' -d '{
"model": "gpt-4",
"messages": [{ "role": "user", "content": "badword request" }]
}' -H "Authorization: Bearer <your token here>"
```
The request will fail with a 400 error and the following response:
```bash
{"message":"Request doesn't match allow patterns"}
```


@@ -0,0 +1,102 @@
---
title: ai-prompt-template
keywords:
- Apache APISIX
- API Gateway
- Plugin
- ai-prompt-template
description: This document contains information about the Apache APISIX ai-prompt-template Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `ai-prompt-template` plugin simplifies access to LLM providers, such as OpenAI and Anthropic, and their models by predefining the request format
using a template, which only allows users to pass customized values into template variables.
## Plugin Attributes
| **Field** | **Required** | **Type** | **Description** |
| ------------------------------------- | ------------ | -------- | --------------------------------------------------------------------------------------------------------------------------- |
| `templates` | Yes | Array | An array of template objects |
| `templates.name` | Yes | String | Name of the template. |
| `templates.template.model`            | Yes          | String   | Name of the LLM model, for example `gpt-4` or `gpt-3.5`. See your LLM provider's API documentation for available models.     |
| `templates.template.messages.role` | Yes | String | Role of the message (`system`, `user`, `assistant`) |
| `templates.template.messages.content` | Yes | String | Content of the message. |
## Example usage
Create a route with the `ai-prompt-template` plugin like so:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes/1" -X PUT \
-H "X-API-KEY: ${ADMIN_API_KEY}" \
-d '{
"uri": "/v1/chat/completions",
"upstream": {
"type": "roundrobin",
"nodes": {
"api.openai.com:443": 1
},
"scheme": "https",
"pass_host": "node"
},
"plugins": {
"ai-prompt-template": {
"templates": [
{
"name": "level of detail",
"template": {
"model": "gpt-4",
"messages": [
{
"role": "user",
"content": "Explain about {{ topic }} in {{ level }}."
}
]
}
}
]
}
}
}'
```
Now send a request:
```shell
curl http://127.0.0.1:9080/v1/chat/completions -i -XPOST -H 'Content-Type: application/json' -d '{
"template_name": "level of detail",
"topic": "psychology",
"level": "brief"
}' -H "Authorization: Bearer <your token here>"
```
Then the request body will be modified to something like this:
```json
{
"model": "some model",
"messages": [
{ "role": "user", "content": "Explain about psychology in brief." }
]
}
```

File diff suppressed because it is too large


@@ -0,0 +1,453 @@
---
title: ai-proxy
keywords:
- Apache APISIX
- API Gateway
- Plugin
- ai-proxy
- AI
- LLM
description: The ai-proxy Plugin simplifies access to LLM and embedding models providers by converting Plugin configurations into the required request format for OpenAI, DeepSeek, AIMLAPI, and other OpenAI-compatible APIs.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
<head>
<link rel="canonical" href="https://docs.api7.ai/hub/ai-proxy" />
</head>
## Description
The `ai-proxy` Plugin simplifies access to LLM and embedding models by transforming Plugin configurations into the designated request format. It supports the integration with OpenAI, DeepSeek, AIMLAPI, and other OpenAI-compatible APIs.
In addition, the Plugin also supports logging LLM request information in the access log, such as token usage, model, time to the first response, and more.
## Request Format
| Name | Type | Required | Description |
| ------------------ | ------ | -------- | --------------------------------------------------- |
| `messages` | Array | True | An array of message objects. |
| `messages.role` | String | True | Role of the message (`system`, `user`, `assistant`).|
| `messages.content` | String | True | Content of the message. |
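A minimal request body that conforms to this format:
```json
{
  "messages": [
    { "role": "system", "content": "You are a mathematician" },
    { "role": "user", "content": "What is 1+1?" }
  ]
}
```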
## Attributes
| Name | Type | Required | Default | Valid values | Description |
|--------------------|--------|----------|---------|------------------------------------------|-------------|
| provider | string | True | | [openai, deepseek, aimlapi, openai-compatible] | LLM service provider. When set to `openai`, the Plugin will proxy the request to `https://api.openai.com/chat/completions`. When set to `deepseek`, the Plugin will proxy the request to `https://api.deepseek.com/chat/completions`. When set to `aimlapi`, the Plugin uses the OpenAI-compatible driver and proxies the request to `https://api.aimlapi.com/v1/chat/completions` by default. When set to `openai-compatible`, the Plugin will proxy the request to the custom endpoint configured in `override`. |
| auth | object | True | | | Authentication configurations. |
| auth.header | object | False | | | Authentication headers. At least one of `header` or `query` must be configured. |
| auth.query | object | False | | | Authentication query parameters. At least one of `header` or `query` must be configured. |
| options | object | False | | | Model configurations. In addition to `model`, you can configure additional parameters and they will be forwarded to the upstream LLM service in the request body. For instance, if you are working with OpenAI, you can configure additional parameters such as `temperature`, `top_p`, and `stream`. See your LLM provider's API documentation for more available options. |
| options.model | string | False | | | Name of the LLM model, such as `gpt-4` or `gpt-3.5`. Refer to the LLM provider's API documentation for available models. |
| override | object | False | | | Override setting. |
| override.endpoint | string | False | | | Custom LLM provider endpoint, required when `provider` is `openai-compatible`. |
| logging | object | False | | | Logging configurations. |
| logging.summaries  | boolean | False    | false   |                                          | If true, logs the request's LLM model, duration, and request and response token usage. |
| logging.payloads | boolean | False | false | | If true, logs request and response payload. |
| timeout | integer | False | 30000 | ≥ 1 | Request timeout in milliseconds when requesting the LLM service. |
| keepalive | boolean | False | true | | If true, keeps the connection alive when requesting the LLM service. |
| keepalive_timeout | integer | False | 60000 | ≥ 1000 | Keepalive timeout in milliseconds when connecting to the LLM service. |
| keepalive_pool | integer | False | 30 | | Keepalive pool size for the LLM service connection. |
| ssl_verify | boolean | False | true | | If true, verifies the LLM service's certificate. |
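For providers that expect credentials as query parameters rather than headers, `auth.query` can be used instead of, or alongside, `auth.header`. The snippet below is only a sketch: the `api_key` parameter name and the endpoint are illustrative assumptions that depend on your provider.
```json
{
  "ai-proxy": {
    "provider": "openai-compatible",
    "auth": {
      "query": {
        "api_key": "<your-api-key>"
      }
    },
    "options": {
      "model": "gpt-4"
    },
    "override": {
      "endpoint": "https://your-llm-host/v1/chat/completions"
    }
  }
}
```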
## Examples
The examples below demonstrate how you can configure `ai-proxy` for different scenarios.
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
### Proxy to OpenAI
The following example demonstrates how you can configure the API key, model, and other parameters in the `ai-proxy` Plugin and configure the Plugin on a Route to proxy user prompts to OpenAI.
Obtain the OpenAI [API key](https://openai.com/blog/openai-api) and save it to an environment variable:
```shell
export OPENAI_API_KEY=<your-api-key>
```
Create a Route and configure the `ai-proxy` Plugin as such:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "ai-proxy-route",
"uri": "/anything",
"methods": ["POST"],
"plugins": {
"ai-proxy": {
"provider": "openai",
"auth": {
"header": {
"Authorization": "Bearer '"$OPENAI_API_KEY"'"
}
},
"options":{
"model": "gpt-4"
}
}
}
}'
```
Send a POST request to the Route with a system prompt and a sample user question in the request body:
```shell
curl "http://127.0.0.1:9080/anything" -X POST \
-H "Content-Type: application/json" \
-H "Host: api.openai.com" \
-d '{
"messages": [
{ "role": "system", "content": "You are a mathematician" },
{ "role": "user", "content": "What is 1+1?" }
]
}'
```
You should receive a response similar to the following:
```json
{
...,
"model": "gpt-4-0613",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "1+1 equals 2.",
"refusal": null
},
"logprobs": null,
"finish_reason": "stop"
}
],
...
}
```
### Proxy to DeepSeek
The following example demonstrates how you can configure the `ai-proxy` Plugin to proxy requests to DeepSeek.
Obtain the DeepSeek API key and save it to an environment variable:
```shell
export DEEPSEEK_API_KEY=<your-api-key>
```
Create a Route and configure the `ai-proxy` Plugin as such:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "ai-proxy-route",
"uri": "/anything",
"methods": ["POST"],
"plugins": {
"ai-proxy": {
"provider": "deepseek",
"auth": {
"header": {
"Authorization": "Bearer '"$DEEPSEEK_API_KEY"'"
}
},
"options": {
"model": "deepseek-chat"
}
}
}
}'
```
Send a POST request to the Route with a sample question in the request body:
```shell
curl "http://127.0.0.1:9080/anything" -X POST \
-H "Content-Type: application/json" \
-d '{
"messages": [
{
"role": "system",
"content": "You are an AI assistant that helps people find information."
},
{
"role": "user",
"content": "Write me a 50-word introduction for Apache APISIX."
}
]
}'
```
You should receive a response similar to the following:
```json
{
...
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "Apache APISIX is a dynamic, real-time, high-performance API gateway and cloud-native platform. It provides rich traffic management features like load balancing, dynamic upstream, canary release, circuit breaking, authentication, observability, and more. Designed for microservices and serverless architectures, APISIX ensures scalability, security, and seamless integration with modern DevOps workflows."
},
"logprobs": null,
"finish_reason": "stop"
}
],
...
}
```
### Proxy to Azure OpenAI
The following example demonstrates how you can configure the `ai-proxy` Plugin to proxy requests to other LLM services, such as Azure OpenAI.
Obtain the Azure OpenAI API key and save it to an environment variable:
```shell
export AZ_OPENAI_API_KEY=<your-api-key>
```
Create a Route and configure the `ai-proxy` Plugin as such:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "ai-proxy-route",
"uri": "/anything",
"methods": ["POST"],
"plugins": {
"ai-proxy": {
"provider": "openai-compatible",
"auth": {
"header": {
"api-key": "'"$AZ_OPENAI_API_KEY"'"
}
},
"options":{
"model": "gpt-4"
},
"override": {
"endpoint": "https://api7-auzre-openai.openai.azure.com/openai/deployments/gpt-4/chat/completions?api-version=2024-02-15-preview"
}
}
}
}'
```
Send a POST request to the Route with a sample question in the request body:
```shell
curl "http://127.0.0.1:9080/anything" -X POST \
-H "Content-Type: application/json" \
-d '{
"messages": [
{
"role": "system",
"content": "You are an AI assistant that helps people find information."
},
{
"role": "user",
"content": "Write me a 50-word introduction for Apache APISIX."
}
],
"max_tokens": 800,
"temperature": 0.7,
"frequency_penalty": 0,
"presence_penalty": 0,
"top_p": 0.95,
"stop": null
}'
```
You should receive a response similar to the following:
```json
{
"choices": [
{
...,
"message": {
"content": "Apache APISIX is a modern, cloud-native API gateway built to handle high-performance and low-latency use cases. It offers a wide range of features, including load balancing, rate limiting, authentication, and dynamic routing, making it an ideal choice for microservices and cloud-native architectures.",
"role": "assistant"
}
}
],
...
}
```
### Proxy to Embedding Models
The following example demonstrates how you can configure the `ai-proxy` Plugin to proxy requests to embedding models. This example will use the OpenAI embedding model endpoint.
Obtain the OpenAI [API key](https://openai.com/blog/openai-api) and save it to an environment variable:
```shell
export OPENAI_API_KEY=<your-api-key>
```
Create a Route and configure the `ai-proxy` Plugin as such:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "ai-proxy-route",
"uri": "/embeddings",
"methods": ["POST"],
"plugins": {
"ai-proxy": {
"provider": "openai",
"auth": {
"header": {
"Authorization": "Bearer '"$OPENAI_API_KEY"'"
}
},
"options":{
"model": "text-embedding-3-small",
"encoding_format": "float"
},
"override": {
"endpoint": "https://api.openai.com/v1/embeddings"
}
}
}
}'
```
Send a POST request to the Route with an input string:
```shell
curl "http://127.0.0.1:9080/embeddings" -X POST \
-H "Content-Type: application/json" \
-d '{
"input": "hello world"
}'
```
You should receive a response similar to the following:
```json
{
"object": "list",
"data": [
{
"object": "embedding",
"index": 0,
"embedding": [
-0.0067144386,
-0.039197803,
0.034177095,
0.028763203,
-0.024785956,
-0.04201061,
...
],
}
],
"model": "text-embedding-3-small",
"usage": {
"prompt_tokens": 2,
"total_tokens": 2
}
}
```
### Include LLM Information in Access Log
The following example demonstrates how you can log LLM request related information in the gateway's access log to improve analytics and audit. The following variables are available:
* `request_type`: Type of request, where the value could be `traditional_http`, `ai_chat`, or `ai_stream`.
* `llm_time_to_first_token`: Duration from request sending to the first token received from the LLM service, in milliseconds.
* `llm_model`: LLM model.
* `llm_prompt_tokens`: Number of tokens in the prompt.
* `llm_completion_tokens`: Number of tokens in the chat completion.
:::note
The usage will become available in APISIX 3.13.0.
:::
Update the access log format in your configuration file to include additional LLM related variables:
```yaml title="conf/config.yaml"
nginx_config:
http:
access_log_format: "$remote_addr $remote_user [$time_local] $http_host \"$request_line\" $status $body_bytes_sent $request_time \"$http_referer\" \"$http_user_agent\" $upstream_addr $upstream_status $upstream_response_time \"$upstream_scheme://$upstream_host$upstream_uri\" \"$apisix_request_id\" \"$request_type\" \"$llm_time_to_first_token\" \"$llm_model\" \"$llm_prompt_tokens\" \"$llm_completion_tokens\""
```
Reload APISIX for configuration changes to take effect.
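For example, with a standard CLI installation (adjust accordingly for Docker or Kubernetes deployments):
```shell
apisix reload
```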
Now if you create a Route and send a request following the [Proxy to OpenAI example](#proxy-to-openai), you should receive a response similar to the following:
```json
{
...,
"model": "gpt-4-0613",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "1+1 equals 2.",
"refusal": null,
"annotations": []
},
"logprobs": null,
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 23,
"completion_tokens": 8,
"total_tokens": 31,
"prompt_tokens_details": {
"cached_tokens": 0,
"audio_tokens": 0
},
...
},
"service_tier": "default",
"system_fingerprint": null
}
```
In the gateway's access log, you should see a log entry similar to the following:
```text
192.168.215.1 - [21/Mar/2025:04:28:03 +0000] api.openai.com "POST /anything HTTP/1.1" 200 804 2.858 "-" "curl/8.6.0" - "http://api.openai.com" "5c5e0b95f8d303cb81e4dc456a4b12d9" "ai_chat" "2858" "gpt-4" "23" "8"
```
The access log entry shows the request type is `ai_chat`, time to first token is `2858` milliseconds, LLM model is `gpt-4`, prompt token usage is `23`, and completion token usage is `8`.


@@ -0,0 +1,235 @@
---
title: ai-rag
keywords:
- Apache APISIX
- API Gateway
- Plugin
- ai-rag
- AI
- LLM
description: The ai-rag Plugin enhances LLM outputs with Retrieval-Augmented Generation (RAG), efficiently retrieving relevant documents to improve accuracy and contextual relevance in responses.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
<head>
<link rel="canonical" href="https://docs.api7.ai/hub/ai-rag" />
</head>
## Description
The `ai-rag` Plugin provides Retrieval-Augmented Generation (RAG) capabilities with LLMs. It facilitates the efficient retrieval of relevant documents or information from external data sources, which are used to enhance the LLM responses, thereby improving the accuracy and contextual relevance of the generated outputs.
The Plugin supports using [Azure OpenAI](https://azure.microsoft.com/en-us/products/ai-services/openai-service) and [Azure AI Search](https://azure.microsoft.com/en-us/products/ai-services/ai-search) services for generating embeddings and performing vector search.
**_As of now only [Azure OpenAI](https://azure.microsoft.com/en-us/products/ai-services/openai-service) and [Azure AI Search](https://azure.microsoft.com/en-us/products/ai-services/ai-search) services are supported for generating embeddings and performing vector search respectively. PRs for introducing support for other service providers are welcomed._**
## Attributes
| Name | Required | Type | Description |
| ----------------------------------------------- | ------------ | -------- | ----------------------------------------------------------------------------------------------------------------------------------------- |
| embeddings_provider | True | object | Configurations of the embedding models provider. |
| embeddings_provider.azure_openai | True | object | Configurations of [Azure OpenAI](https://azure.microsoft.com/en-us/products/ai-services/openai-service) as the embedding models provider. |
| embeddings_provider.azure_openai.endpoint | True | string | Azure OpenAI embedding model endpoint. |
| embeddings_provider.azure_openai.api_key | True | string | Azure OpenAI API key. |
| vector_search_provider | True | object | Configuration for the vector search provider. |
| vector_search_provider.azure_ai_search | True | object | Configuration for Azure AI Search. |
| vector_search_provider.azure_ai_search.endpoint | True | string | Azure AI Search endpoint. |
| vector_search_provider.azure_ai_search.api_key | True | string | Azure AI Search API key. |
## Request Body Format
The following fields must be present in the request body.
| Field | Type | Description |
| -------------------- | -------- | ------------------------------------------------------------------------------------------------------------------------------- |
| ai_rag | object | Request body RAG specifications. |
| ai_rag.embeddings | object | Request parameters required to generate embeddings. Contents will depend on the API specification of the configured provider. |
| ai_rag.vector_search | object | Request parameters required to perform vector search. Contents will depend on the API specification of the configured provider. |
- Parameters of `ai_rag.embeddings`
- Azure OpenAI
| Name | Required | Type | Description |
| --------------- | ------------ | -------- | -------------------------------------------------------------------------------------------------------------------------- |
| input | True | string | Input text used to compute embeddings, encoded as a string. |
| user | False | string | A unique identifier representing your end-user, which can help in monitoring and detecting abuse. |
| encoding_format | False | string | The format to return the embeddings in. Can be either `float` or `base64`. Defaults to `float`. |
| dimensions | False | integer | The number of dimensions the resulting output embeddings should have. Only supported in text-embedding-3 and later models. |
For other parameters please refer to the [Azure OpenAI embeddings documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/reference#embeddings).
- Parameters of `ai_rag.vector_search`
- Azure AI Search
| Field | Required | Type | Description |
| --------- | ------------ | -------- | ---------------------------- |
| fields | True | String | Fields for the vector search. |
For other parameters please refer to the [Azure AI Search documentation](https://learn.microsoft.com/en-us/rest/api/searchservice/documents/search-post).
Example request body:
```json
{
"ai_rag": {
"vector_search": { "fields": "contentVector" },
"embeddings": {
"input": "which service is good for devops",
"dimensions": 1024
}
}
}
```
## Example
To follow along the example, create an [Azure account](https://portal.azure.com) and complete the following steps:
* In [Azure AI Foundry](https://oai.azure.com/portal), deploy a generative chat model, such as `gpt-4o`, and an embedding model, such as `text-embedding-3-large`. Obtain the API key and model endpoints.
* Follow [Azure's example](https://github.com/Azure/azure-search-vector-samples/blob/main/demo-python/code/basic-vector-workflow/azure-search-vector-python-sample.ipynb) to prepare for a vector search in [Azure AI Search](https://azure.microsoft.com/en-us/products/ai-services/ai-search) using Python. The example will create a search index called `vectest` with the desired schema and upload the [sample data](https://github.com/Azure/azure-search-vector-samples/blob/main/data/text-sample.json) which contains 108 descriptions of various Azure services, for embeddings `titleVector` and `contentVector` to be generated based on `title` and `content`. Complete all the setups before performing vector searches in Python.
* In [Azure AI Search](https://azure.microsoft.com/en-us/products/ai-services/ai-search), [obtain the Azure vector search API key and the search service endpoint](https://learn.microsoft.com/en-us/azure/search/search-get-started-vector?tabs=api-key#retrieve-resource-information).
Save the API keys and endpoints to environment variables:
```shell
# replace with your values
AZ_OPENAI_DOMAIN=https://ai-plugin-developer.openai.azure.com
AZ_OPENAI_API_KEY=9m7VYroxITMDEqKKEnpOknn1rV7QNQT7DrIBApcwMLYJQQJ99ALACYeBjFXJ3w3AAABACOGXGcd
AZ_CHAT_ENDPOINT=${AZ_OPENAI_DOMAIN}/openai/deployments/gpt-4o/chat/completions?api-version=2024-02-15-preview
AZ_EMBEDDING_MODEL=text-embedding-3-large
AZ_EMBEDDINGS_ENDPOINT=${AZ_OPENAI_DOMAIN}/openai/deployments/${AZ_EMBEDDING_MODEL}/embeddings?api-version=2023-05-15
AZ_AI_SEARCH_SVC_DOMAIN=https://ai-plugin-developer.search.windows.net
AZ_AI_SEARCH_KEY=IFZBp3fKVdq7loEVe9LdwMvVdZrad9A4lPH90AzSeC06SlR
AZ_AI_SEARCH_INDEX=vectest
AZ_AI_SEARCH_ENDPOINT=${AZ_AI_SEARCH_SVC_DOMAIN}/indexes/${AZ_AI_SEARCH_INDEX}/docs/search?api-version=2024-07-01
```
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
### Integrate with Azure for RAG-Enhanced Responses
The following example demonstrates how you can use the [`ai-proxy`](./ai-proxy.md) Plugin to proxy requests to Azure OpenAI LLM and use the `ai-rag` Plugin to generate embeddings and perform vector search to enhance LLM responses.
Create a Route as such:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${ADMIN_API_KEY}" \
-d '{
"id": "ai-rag-route",
"uri": "/rag",
"plugins": {
"ai-rag": {
"embeddings_provider": {
"azure_openai": {
"endpoint": "'"$AZ_EMBEDDINGS_ENDPOINT"'",
"api_key": "'"$AZ_OPENAI_API_KEY"'"
}
},
"vector_search_provider": {
"azure_ai_search": {
"endpoint": "'"$AZ_AI_SEARCH_ENDPOINT"'",
"api_key": "'"$AZ_AI_SEARCH_KEY"'"
}
}
},
"ai-proxy": {
"provider": "openai",
"auth": {
"header": {
"api-key": "'"$AZ_OPENAI_API_KEY"'"
}
},
"model": "gpt-4o",
"override": {
"endpoint": "'"$AZ_CHAT_ENDPOINT"'"
}
}
}
}'
```
Send a POST request to the Route with the vector fields name, embedding model dimensions, and an input prompt in the request body:
```shell
curl "http://127.0.0.1:9080/rag" -X POST \
-H "Content-Type: application/json" \
-d '{
"ai_rag":{
"vector_search":{
"fields":"contentVector"
},
"embeddings":{
"input":"Which Azure services are good for DevOps?",
"dimensions":1024
}
}
}'
```
You should receive an `HTTP/1.1 200 OK` response similar to the following:
```json
{
"choices": [
{
"content_filter_results": {
...
},
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "Here is a list of Azure services categorized along with a brief description of each based on the provided JSON data:\n\n### Developer Tools\n- **Azure DevOps**: A suite of services that help you plan, build, and deploy applications, including Azure Boards, Azure Repos, Azure Pipelines, Azure Test Plans, and Azure Artifacts.\n- **Azure DevTest Labs**: A fully managed service to create, manage, and share development and test environments in Azure, supporting custom templates, cost management, and integration with Azure DevOps.\n\n### Containers\n- **Azure Kubernetes Service (AKS)**: A managed container orchestration service based on Kubernetes, simplifying deployment and management of containerized applications with features like automatic upgrades and scaling.\n- **Azure Container Instances**: A serverless container runtime to run and scale containerized applications without managing the underlying infrastructure.\n- **Azure Container Registry**: A fully managed Docker registry service to store and manage container images and artifacts.\n\n### Web\n- **Azure App Service**: A fully managed platform for building, deploying, and scaling web apps, mobile app backends, and RESTful APIs with support for multiple programming languages.\n- **Azure SignalR Service**: A fully managed real-time messaging service to build and scale real-time web applications.\n- **Azure Static Web Apps**: A serverless hosting service for modern web applications using static front-end technologies and serverless APIs.\n\n### Compute\n- **Azure Virtual Machines**: Infrastructure-as-a-Service (IaaS) offering for deploying and managing virtual machines in the cloud.\n- **Azure Functions**: A serverless compute service to run event-driven code without managing infrastructure.\n- **Azure Batch**: A job scheduling service to run large-scale parallel and high-performance computing (HPC) applications.\n- **Azure Service Fabric**: A platform to build, deploy, and manage scalable and reliable microservices and container-based applications.\n- **Azure Quantum**: A quantum computing service to build and run quantum applications.\n- **Azure Stack Edge**: A managed edge computing appliance to run Azure services and AI workloads on-premises or at the edge.\n\n### Security\n- **Azure Bastion**: A fully managed service providing secure and scalable remote access to virtual machines.\n- **Azure Security Center**: A unified security management service to protect workloads across Azure and on-premises infrastructure.\n- **Azure DDoS Protection**: A cloud-based service to protect applications and resources from distributed denial-of-service (DDoS) attacks.\n\n### Databases\n",
"role": "assistant"
}
}
],
"created": 1740625850,
"id": "chatcmpl-B54gQdumpfioMPIybFnirr6rq9ZZS",
"model": "gpt-4o-2024-05-13",
"object": "chat.completion",
"prompt_filter_results": [
{
"prompt_index": 0,
"content_filter_results": {
...
}
}
],
"system_fingerprint": "fp_65792305e4",
"usage": {
...
}
}
```


@@ -0,0 +1,873 @@
---
title: ai-rate-limiting
keywords:
- Apache APISIX
- API Gateway
- Plugin
- ai-rate-limiting
- AI
- LLM
description: The ai-rate-limiting Plugin enforces token-based rate limiting for LLM service requests, preventing overuse, optimizing API consumption, and ensuring efficient resource allocation.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
<head>
<link rel="canonical" href="https://docs.api7.ai/hub/ai-rate-limiting" />
</head>
## Description
The `ai-rate-limiting` Plugin enforces token-based rate limiting for requests sent to LLM services. It helps manage API usage by controlling the number of tokens consumed within a specified time frame, ensuring fair resource allocation and preventing excessive load on the service. It is often used with [`ai-proxy`](./ai-proxy.md) or [`ai-proxy-multi`](./ai-proxy-multi.md) plugin.
## Attributes
| Name | Type | Required | Default | Valid values | Description |
|------------------------------|----------------|----------|----------|---------------------------------------------------------|-------------|
| limit | integer | False | | >0 | The maximum number of tokens allowed within a given time interval. At least one of `limit` and `instances.limit` should be configured. |
| time_window | integer | False | | >0 | The time interval corresponding to the rate limiting `limit` in seconds. At least one of `time_window` and `instances.time_window` should be configured. |
| show_limit_quota_header | boolean | False | true | | If true, includes `X-AI-RateLimit-Limit-*`, `X-AI-RateLimit-Remaining-*`, and `X-AI-RateLimit-Reset-*` headers in the response, where `*` is the instance name. |
| limit_strategy | string | False | total_tokens | [total_tokens, prompt_tokens, completion_tokens] | Type of token to apply rate limiting. `total_tokens` is the sum of `prompt_tokens` and `completion_tokens`. |
| instances | array[object] | False | | | LLM instance rate limiting configurations. |
| instances.name | string | True | | | Name of the LLM service instance. |
| instances.limit | integer | True | | >0 | The maximum number of tokens allowed within a given time interval for an instance. |
| instances.time_window | integer | True | | >0 | The time interval corresponding to the rate limiting `limit` in seconds for an instance. |
| rejected_code | integer | False | 503 | [200, 599] | The HTTP status code returned when a request exceeding the quota is rejected. |
| rejected_msg | string | False | | | The response body returned when a request exceeding the quota is rejected. |
## Examples
The examples below demonstrate how you can configure `ai-rate-limiting` for different scenarios.
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
### Apply Rate Limiting with `ai-proxy`
The following example demonstrates how you can use `ai-proxy` to proxy LLM traffic and use `ai-rate-limiting` to configure token-based rate limiting on the instance.
Create a Route as such and update with your LLM providers, models, API keys, and endpoints, if applicable:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "ai-rate-limiting-route",
"uri": "/anything",
"methods": ["POST"],
"plugins": {
"ai-proxy": {
"provider": "openai",
"auth": {
"header": {
"Authorization": "Bearer '"$OPENAI_API_KEY"'"
}
},
"options": {
"model": "gpt-35-turbo-instruct",
"max_tokens": 512,
"temperature": 1.0
}
},
"ai-rate-limiting": {
"limit": 300,
"time_window": 30,
"limit_strategy": "prompt_tokens"
}
}
}'
```
Send a POST request to the Route with a system prompt and a sample user question in the request body:
```shell
curl "http://127.0.0.1:9080/anything" -X POST \
-H "Content-Type: application/json" \
-d '{
"messages": [
{ "role": "system", "content": "You are a mathematician" },
{ "role": "user", "content": "What is 1+1?" }
]
}'
```
You should receive a response similar to the following:
```json
{
...
"model": "deepseek-chat",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "1 + 1 equals 2. This is a fundamental arithmetic operation where adding one unit to another results in a total of two units."
},
"logprobs": null,
"finish_reason": "stop"
}
],
...
}
```
If the rate limiting quota of 300 prompt tokens has been consumed in a 30-second window, all additional requests will be rejected.
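A rejected request receives the configured `rejected_code` (`503` by default) and, when `show_limit_quota_header` is enabled, the `X-AI-RateLimit-*` headers described above. A hedged sketch of such a response (the header suffix and values are illustrative):
```text
HTTP/1.1 503 Service Temporarily Unavailable
X-AI-RateLimit-Limit-ai-proxy: 300
X-AI-RateLimit-Remaining-ai-proxy: 0
X-AI-RateLimit-Reset-ai-proxy: 12
```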
### Rate Limit One Instance Among Multiple
The following example demonstrates how you can use `ai-proxy-multi` to configure two models for load balancing, forwarding 80% of the traffic to one instance and 20% to the other. Additionally, use `ai-rate-limiting` to configure token-based rate limiting on the instance that receives 80% of the traffic, such that when the configured quota is fully consumed, the additional traffic will be forwarded to the other instance.
Create a Route which applies rate limiting quota of 100 total tokens in a 30-second window on the `deepseek-instance-1` instance, and update with your LLM providers, models, API keys, and endpoints, if applicable:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "ai-rate-limiting-route",
"uri": "/anything",
"methods": ["POST"],
"plugins": {
"ai-rate-limiting": {
"instances": [
{
"name": "deepseek-instance-1",
"provider": "deepseek",
"weight": 8,
"auth": {
"header": {
"Authorization": "Bearer '"$DEEPSEEK_API_KEY"'"
}
},
"options": {
"model": "deepseek-chat"
}
},
{
"name": "deepseek-instance-2",
"provider": "deepseek",
"weight": 2,
"auth": {
"header": {
"Authorization": "Bearer '"$DEEPSEEK_API_KEY"'"
}
},
"options": {
"model": "deepseek-chat"
}
}
]
},
"ai-rate-limiting": {
"instances": [
{
"name": "deepseek-instance-1",
"limit_strategy": "total_tokens",
"limit": 100,
"time_window": 30
}
]
}
}
}'
```
Send a POST request to the Route with a system prompt and a sample user question in the request body:
```shell
curl "http://127.0.0.1:9080/anything" -X POST \
-H "Content-Type: application/json" \
-d '{
"messages": [
{ "role": "system", "content": "You are a mathematician" },
{ "role": "user", "content": "What is 1+1?" }
]
}'
```
You should receive a response similar to the following:
```json
{
...
"model": "deepseek-chat",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "1 + 1 equals 2. This is a fundamental arithmetic operation where adding one unit to another results in a total of two units."
},
"logprobs": null,
"finish_reason": "stop"
}
],
...
}
```
If `deepseek-instance-1` instance rate limiting quota of 100 tokens has been consumed in a 30-second window, the additional requests will all be forwarded to `deepseek-instance-2`, which is not rate limited.
### Apply the Same Quota to All Instances
The following example demonstrates how you can apply the same rate limiting quota to all LLM upstream instances in `ai-rate-limiting`.
For demonstration and easier differentiation, you will be configuring one OpenAI instance and one DeepSeek instance as the upstream LLM services.
Create a Route which applies a rate limiting quota of 100 total tokens for all instances within a 60-second window, and update with your LLM providers, models, API keys, and endpoints, if applicable:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "ai-rate-limiting-route",
"uri": "/anything",
"methods": ["POST"],
"plugins": {
"ai-rate-limiting": {
"instances": [
{
"name": "openai-instance",
"provider": "openai",
"weight": 0,
"auth": {
"header": {
"Authorization": "Bearer '"$OPENAI_API_KEY"'"
}
},
"options": {
"model": "gpt-4"
}
},
{
"name": "deepseek-instance",
"provider": "deepseek",
"weight": 0,
"auth": {
"header": {
"Authorization": "Bearer '"$DEEPSEEK_API_KEY"'"
}
},
"options": {
"model": "deepseek-chat"
}
}
]
},
"ai-rate-limiting": {
"limit": 100,
"time_window": 60,
"rejected_code": 429,
"limit_strategy": "total_tokens"
}
}
}'
```
Send a POST request to the Route with a system prompt and a sample user question in the request body:
```shell
curl -i "http://127.0.0.1:9080/anything" -X POST \
-H "Content-Type: application/json" \
-d '{
"messages": [
{ "role": "system", "content": "You are a mathematician" },
{ "role": "user", "content": "Explain Newtons laws" }
]
}'
```
You should receive a response from either LLM instance, similar to the following:
```json
{
...,
"model": "gpt-4-0613",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "Sure! Sir Isaac Newton formulated three laws of motion that describe the motion of objects. These laws are widely used in physics and engineering for studying and understanding how things move. Here they are:\n\n1. Newton's First Law - Law of Inertia: An object at rest tends to stay at rest and an object in motion tends to stay in motion with the same speed and in the same direction unless acted upon by an unbalanced force. This is also known as the principle of inertia.\n\n2. Newton's Second Law of Motion - Force and Acceleration: The acceleration of an object is directly proportional to the net force acting on it and inversely proportional to its mass. This is usually formulated as F=ma where F is the force applied, m is the mass of the object and a is the acceleration produced.\n\n3. Newton's Third Law - Action and Reaction: For every action, there is an equal and opposite reaction. This means that any force exerted on a body will create a force of equal magnitude but in the opposite direction on the object that exerted the first force.\n\nIn simple terms: \n1. If you slide a book on a table and let go, it will stop because of the friction (or force) between it and the table.\n2.",
"refusal": null
},
"logprobs": null,
"finish_reason": "length"
}
],
"usage": {
"prompt_tokens": 23,
"completion_tokens": 256,
"total_tokens": 279,
"prompt_tokens_details": {
"cached_tokens": 0,
"audio_tokens": 0
},
"completion_tokens_details": {
"reasoning_tokens": 0,
"audio_tokens": 0,
"accepted_prediction_tokens": 0,
"rejected_prediction_tokens": 0
}
},
"service_tier": "default",
"system_fingerprint": null
}
```
Since the `total_tokens` value exceeds the configured quota of `100`, the next request within the 60-second window is expected to be forwarded to the other instance.
Within the same 60-second window, send another POST request to the Route:
```shell
curl -i "http://127.0.0.1:9080/anything" -X POST \
-H "Content-Type: application/json" \
-d '{
"messages": [
{ "role": "system", "content": "You are a mathematician" },
{ "role": "user", "content": "Explain Newtons laws" }
]
}'
```
You should receive a response from the other LLM instance, similar to the following:
```json
{
...
"model": "deepseek-chat",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "Sure! Newton's laws of motion are three fundamental principles that describe the relationship between the motion of an object and the forces acting on it. They were formulated by Sir Isaac Newton in the late 17th century and are foundational to classical mechanics. Here's an explanation of each law:\n\n---\n\n### **1. Newton's First Law (Law of Inertia)**\n- **Statement**: An object will remain at rest or in uniform motion in a straight line unless acted upon by an external force.\n- **What it means**: This law introduces the concept of **inertia**, which is the tendency of an object to resist changes in its state of motion. If no net force acts on an object, its velocity (speed and direction) will not change.\n- **Example**: A book lying on a table will stay at rest unless you push it. Similarly, a hockey puck sliding on ice will keep moving at a constant speed unless friction or another force slows it down.\n\n---\n\n### **2. Newton's Second Law (Law of Acceleration)**\n- **Statement**: The acceleration of an object is directly proportional to the net force acting on it and inversely proportional to its mass. Mathematically, this is expressed as:\n \\[\n F = ma\n \\]\n"
},
"logprobs": null,
"finish_reason": "length"
}
],
"usage": {
"prompt_tokens": 13,
"completion_tokens": 256,
"total_tokens": 269,
"prompt_tokens_details": {
"cached_tokens": 0
},
"prompt_cache_hit_tokens": 0,
"prompt_cache_miss_tokens": 13
},
"system_fingerprint": "fp_3a5770e1b4_prod0225"
}
```
Since the `total_tokens` value exceeds the configured quota of `100`, the next request within the 60-second window is expected to be rejected.
Within the same 60-second window, send a third POST request to the Route:
```shell
curl -i "http://127.0.0.1:9080/anything" -X POST \
-H "Content-Type: application/json" \
-d '{
"messages": [
{ "role": "system", "content": "You are a mathematician" },
{ "role": "user", "content": "Explain Newtons laws" }
]
}'
```
You should receive an `HTTP 429 Too Many Requests` response and observe the following headers:
```text
X-AI-RateLimit-Limit-openai-instance: 100
X-AI-RateLimit-Remaining-openai-instance: 0
X-AI-RateLimit-Reset-openai-instance: 0
X-AI-RateLimit-Limit-deepseek-instance: 100
X-AI-RateLimit-Remaining-deepseek-instance: 0
X-AI-RateLimit-Reset-deepseek-instance: 0
```
### Configure Instance Priority and Rate Limiting
The following example demonstrates how you can configure two models with different priorities and apply rate limiting to the instance with the higher priority. When `fallback_strategy` is set to `instance_health_and_rate_limiting`, the Plugin continues to forward requests to the lower priority instance once the higher priority instance's rate limiting quota is fully consumed.
Create a Route as follows to apply rate limiting and a higher priority to the `openai-instance` instance, and set the `fallback_strategy` to `instance_health_and_rate_limiting`. Update with your LLM providers, models, API keys, and endpoints, if applicable:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "ai-rate-limiting-route",
"uri": "/anything",
"methods": ["POST"],
"plugins": {
"ai-proxy-multi": {
"fallback_strategy: "instance_health_and_rate_limiting",
"instances": [
{
"name": "openai-instance",
"provider": "openai",
"priority": 1,
"weight": 0,
"auth": {
"header": {
"Authorization": "Bearer '"$OPENAI_API_KEY"'"
}
},
"options": {
"model": "gpt-4"
}
},
{
"name": "deepseek-instance",
"provider": "deepseek",
"priority": 0,
"weight": 0,
"auth": {
"header": {
"Authorization": "Bearer '"$DEEPSEEK_API_KEY"'"
}
},
"options": {
"model": "deepseek-chat"
}
}
]
},
"ai-rate-limiting": {
"instances": [
{
"name": "openai-instance",
"limit": 10,
"time_window": 60
}
],
"limit_strategy": "total_tokens"
}
}
}'
```
Send a POST request to the Route with a system prompt and a sample user question in the request body:
```shell
curl "http://127.0.0.1:9080/anything" -X POST \
-H "Content-Type: application/json" \
-d '{
"messages": [
{ "role": "system", "content": "You are a mathematician" },
{ "role": "user", "content": "What is 1+1?" }
]
}'
```
You should receive a response similar to the following:
```json
{
...,
"model": "gpt-4-0613",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "1+1 equals 2.",
"refusal": null
},
"logprobs": null,
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 23,
"completion_tokens": 8,
"total_tokens": 31,
"prompt_tokens_details": {
"cached_tokens": 0,
"audio_tokens": 0
},
"completion_tokens_details": {
"reasoning_tokens": 0,
"audio_tokens": 0,
"accepted_prediction_tokens": 0,
"rejected_prediction_tokens": 0
}
},
"service_tier": "default",
"system_fingerprint": null
}
```
Since the `total_tokens` value exceeds the configured quota of `10`, the next request within the 60-second window is expected to be forwarded to the other instance.
Within the same 60-second window, send another POST request to the Route:
```shell
curl "http://127.0.0.1:9080/anything" -X POST \
-H "Content-Type: application/json" \
-d '{
"messages": [
{ "role": "system", "content": "You are a mathematician" },
{ "role": "user", "content": "Explain Newton law" }
]
}'
```
You should see a response similar to the following:
```json
{
...,
"model": "deepseek-chat",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "Certainly! Newton's laws of motion are three fundamental principles that describe the relationship between the motion of an object and the forces acting on it. They were formulated by Sir Isaac Newton in the late 17th century and are foundational to classical mechanics.\n\n---\n\n### **1. Newton's First Law (Law of Inertia):**\n- **Statement:** An object at rest will remain at rest, and an object in motion will continue moving at a constant velocity (in a straight line at a constant speed), unless acted upon by an external force.\n- **Key Idea:** This law introduces the concept of **inertia**, which is the tendency of an object to resist changes in its state of motion.\n- **Example:** If you slide a book across a table, it eventually stops because of the force of friction acting on it. Without friction, the book would keep moving indefinitely.\n\n---\n\n### **2. Newton's Second Law (Law of Acceleration):**\n- **Statement:** The acceleration of an object is directly proportional to the net force acting on it and inversely proportional to its mass. Mathematically, this is expressed as:\n \\[\n F = ma\n \\]\n where:\n - \\( F \\) = net force applied (in Newtons),\n -"
},
...
}
],
...
}
```
### Load Balance and Rate Limit by Consumers
The following example demonstrates how you can configure two models for load balancing and apply rate limiting by Consumer.
Create a Consumer `johndoe` with a rate limiting quota of 10 tokens in a 60-second window on the `openai-instance` instance:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"username": "johndoe",
"plugins": {
"ai-rate-limiting": {
"instances": [
{
"name": "openai-instance",
"limit": 10,
"time_window": 60
}
],
"rejected_code": 429,
"limit_strategy": "total_tokens"
}
}
}'
```
Configure `key-auth` credential for `johndoe`:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers/johndoe/credentials" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "cred-john-key-auth",
"plugins": {
"key-auth": {
"key": "john-key"
}
}
}'
```
Create another Consumer `janedoe` with a rate limiting quota of 10 tokens in a 60-second window on the `deepseek-instance` instance:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"username": "johndoe",
"plugins": {
"ai-rate-limiting": {
"instances": [
{
"name": "deepseek-instance",
"limit": 10,
"time_window": 60
}
],
"rejected_code": 429,
"limit_strategy": "total_tokens"
}
}
}'
```
Configure `key-auth` credential for `janedoe`:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers/janedoe/credentials" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "cred-jane-key-auth",
"plugins": {
"key-auth": {
"key": "jane-key"
}
}
}'
```
Create a Route as follows and update with your LLM providers, models, API keys, and endpoints, if applicable:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "ai-rate-limiting-route",
"uri": "/anything",
"methods": ["POST"],
"plugins": {
"key-auth": {},
"ai-proxy-multi": {
"fallback_strategy: "instance_health_and_rate_limiting",
"instances": [
{
"name": "openai-instance",
"provider": "openai",
"weight": 0,
"auth": {
"header": {
"Authorization": "Bearer '"$OPENAI_API_KEY"'"
}
},
"options": {
"model": "gpt-4"
}
},
{
"name": "deepseek-instance",
"provider": "deepseek",
"weight": 0,
"auth": {
"header": {
"Authorization": "Bearer '"$DEEPSEEK_API_KEY"'"
}
},
"options": {
"model": "deepseek-chat"
}
}
]
}
}
}'
```
Send a POST request to the Route without any Consumer key:
```shell
curl -i "http://127.0.0.1:9080/anything" -X POST \
-H "Content-Type: application/json" \
-d '{
"messages": [
{ "role": "system", "content": "You are a mathematician" },
{ "role": "user", "content": "What is 1+1?" }
]
}'
```
You should receive an `HTTP/1.1 401 Unauthorized` response.
Send a POST request to the Route with `johndoe`'s key:
```shell
curl "http://127.0.0.1:9080/anything" -X POST \
-H "Content-Type: application/json" \
-H 'apikey: john-key' \
-d '{
"messages": [
{ "role": "system", "content": "You are a mathematician" },
{ "role": "user", "content": "What is 1+1?" }
]
}'
```
You should receive a response similar to the following:
```json
{
...,
"model": "gpt-4-0613",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "1+1 equals 2.",
"refusal": null
},
"logprobs": null,
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 23,
"completion_tokens": 8,
"total_tokens": 31,
"prompt_tokens_details": {
"cached_tokens": 0,
"audio_tokens": 0
},
"completion_tokens_details": {
"reasoning_tokens": 0,
"audio_tokens": 0,
"accepted_prediction_tokens": 0,
"rejected_prediction_tokens": 0
}
},
"service_tier": "default",
"system_fingerprint": null
}
```
Since the `total_tokens` value exceeds the configured quota of the `openai` instance for `johndoe`, the next request within the 60-second window from `johndoe` is expected to be forwarded to the `deepseek` instance.
Within the same 60-second window, send another POST request to the Route with `johndoe`'s key:
```shell
curl "http://127.0.0.1:9080/anything" -X POST \
-H "Content-Type: application/json" \
-H 'apikey: john-key' \
-d '{
"messages": [
{ "role": "system", "content": "You are a mathematician" },
{ "role": "user", "content": "Explain Newtons laws to me" }
]
}'
```
You should see a response similar to the following:
```json
{
...,
"model": "deepseek-chat",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "Certainly! Newton's laws of motion are three fundamental principles that describe the relationship between the motion of an object and the forces acting on it. They were formulated by Sir Isaac Newton in the late 17th century and are foundational to classical mechanics.\n\n---\n\n### **1. Newton's First Law (Law of Inertia):**\n- **Statement:** An object at rest will remain at rest, and an object in motion will continue moving at a constant velocity (in a straight line at a constant speed), unless acted upon by an external force.\n- **Key Idea:** This law introduces the concept of **inertia**, which is the tendency of an object to resist changes in its state of motion.\n- **Example:** If you slide a book across a table, it eventually stops because of the force of friction acting on it. Without friction, the book would keep moving indefinitely.\n\n---\n\n### **2. Newton's Second Law (Law of Acceleration):**\n- **Statement:** The acceleration of an object is directly proportional to the net force acting on it and inversely proportional to its mass. Mathematically, this is expressed as:\n \\[\n F = ma\n \\]\n where:\n - \\( F \\) = net force applied (in Newtons),\n -"
},
...
}
],
...
}
```
Send a POST request to the Route with `janedoe`'s key:
```shell
curl "http://127.0.0.1:9080/anything" -X POST \
-H "Content-Type: application/json" \
-H 'apikey: jane-key' \
-d '{
"messages": [
{ "role": "system", "content": "You are a mathematician" },
{ "role": "user", "content": "What is 1+1?" }
]
}'
```
You should receive a response similar to the following:
```json
{
...,
"model": "deepseek-chat",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "The sum of 1 and 1 is 2. This is a basic arithmetic operation where you combine two units to get a total of two units."
},
"logprobs": null,
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 14,
"completion_tokens": 31,
"total_tokens": 45,
"prompt_tokens_details": {
"cached_tokens": 0
},
"prompt_cache_hit_tokens": 0,
"prompt_cache_miss_tokens": 14
},
"system_fingerprint": "fp_3a5770e1b4_prod0225"
}
```
Since the `total_tokens` value exceeds the configured quota of the `deepseek` instance for `janedoe`, the next request within the 60-second window from `janedoe` is expected to be forwarded to the `openai` instance.
Within the same 60-second window, send another POST request to the Route with `janedoe`'s key:
```shell
curl "http://127.0.0.1:9080/anything" -X POST \
-H "Content-Type: application/json" \
-H 'apikey: jane-key' \
-d '{
"messages": [
{ "role": "system", "content": "You are a mathematician" },
{ "role": "user", "content": "Explain Newtons laws to me" }
]
}'
```
You should see a response similar to the following:
```json
{
...,
"model": "gpt-4-0613",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "Sure, here are Newton's three laws of motion:\n\n1) Newton's First Law, also known as the Law of Inertia, states that an object at rest will stay at rest, and an object in motion will stay in motion, unless acted on by an external force. In simple words, this law suggests that an object will keep doing whatever it is doing until something causes it to do otherwise. \n\n2) Newton's Second Law states that the force acting on an object is equal to the mass of that object times its acceleration (F=ma). This means that force is directly proportional to mass and acceleration. The heavier the object and the faster it accelerates, the greater the force.\n\n3) Newton's Third Law, also known as the law of action and reaction, states that for every action, there is an equal and opposite reaction. Essentially, any force exerted onto a body will create a force of equal magnitude but in the opposite direction on the object that exerted the first force.\n\nRemember, these laws become less accurate when considering speeds near the speed of light (where Einstein's theory of relativity becomes more appropriate) or objects very small or very large. However, for everyday situations, they provide a good model of how things move.",
"refusal": null
},
"logprobs": null,
"finish_reason": "stop"
}
],
...
}
```
This shows that `ai-proxy-multi` load balances traffic according to the per-Consumer rate limiting rules configured in `ai-rate-limiting`.

View File

@@ -0,0 +1,177 @@
---
title: ai-request-rewrite
keywords:
- Apache APISIX
- AI Gateway
- Plugin
- ai-request-rewrite
description: The ai-request-rewrite plugin intercepts client requests before they are forwarded to the upstream service. It sends a predefined prompt, along with the original request body, to a specified LLM service. The LLM processes the input and returns a modified request body, which is then used for the upstream request. This allows dynamic transformation of API requests based on AI-generated content.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `ai-request-rewrite` plugin intercepts client requests before they are forwarded to the upstream service. It sends a predefined prompt, along with the original request body, to a specified LLM service. The LLM processes the input and returns a modified request body, which is then used for the upstream request. This allows dynamic transformation of API requests based on AI-generated content.
## Plugin Attributes
| **Field** | **Required** | **Type** | **Description** |
| ------------------------- | ------------ | -------- | ------------------------------------------------------------------------------------ |
| prompt | Yes | String | The prompt sent to the LLM service. |
| provider | Yes | String | Name of the LLM service. Available options: openai, deepseek, aimlapi, and openai-compatible. When `aimlapi` is selected, the plugin uses the OpenAI-compatible driver with a default endpoint of `https://api.aimlapi.com/v1/chat/completions`. |
| auth | Yes | Object | Authentication configuration |
| auth.header | No | Object | Authentication headers. Key must match pattern `^[a-zA-Z0-9._-]+$`. |
| auth.query | No | Object | Authentication query parameters. Key must match pattern `^[a-zA-Z0-9._-]+$`. |
| options | No | Object | Key/value settings for the model |
| options.model | No | String | Model to execute. Examples: "gpt-3.5-turbo" for openai, "deepseek-chat" for deepseek, or "qwen-turbo" for openai-compatible or aimlapi services. |
| override.endpoint | No | String | Override the default endpoint when using OpenAI-compatible services (e.g., self-hosted models or third-party LLM services). When the provider is 'openai-compatible', the endpoint field is required. |
| timeout | No | Integer | Total timeout in milliseconds for requests to LLM service, including connect, send, and read timeouts. Range: 1 - 60000. Default: 30000|
| keepalive | No | Boolean | Enable keepalive for requests to LLM service. Default: true |
| keepalive_timeout | No | Integer | Keepalive timeout in milliseconds for requests to LLM service. Minimum: 1000. Default: 60000 |
| keepalive_pool | No | Integer | Keepalive pool size for requests to LLM service. Minimum: 1. Default: 30 |
| ssl_verify | No | Boolean | SSL verification for requests to LLM service. Default: true |
## How it works
![image](https://github.com/user-attachments/assets/c7288e4f-00fc-46ca-b69e-d3d74d7085ca)
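In practice, the plugin wraps the configured `prompt` and the original client request body into a chat-completion style payload for the LLM service. A minimal sketch of that payload is shown below; the placeholder values are illustrative only and mirror the concrete example in the next section:
```json
{
  "messages": [
    {
      "role": "system",
      "content": "<the configured prompt>"
    },
    {
      "role": "user",
      "content": "<the original client request body>"
    }
  ]
}
```
The content returned by the LLM then replaces the request body that is forwarded to the upstream service.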
## Examples
The examples below demonstrate how you can configure `ai-request-rewrite` for different scenarios.
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
### Redact sensitive information
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes/1" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"uri": "/anything",
"plugins": {
"ai-request-rewrite": {
"prompt": "Given a JSON request body, identify and mask any sensitive information such as credit card numbers, social security numbers, and personal identification numbers (e.g., passport or driver'\''s license numbers). Replace detected sensitive values with a masked format (e.g., \"*** **** **** 1234\") for credit card numbers. Ensure the JSON structure remains unchanged.",
"provider": "openai",
"auth": {
"header": {
"Authorization": "Bearer <some-token>"
}
},
"options": {
"model": "gpt-4"
}
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Now send a request:
```shell
curl "http://127.0.0.1:9080/anything" \
-H "Content-Type: application/json" \
-d '{
"name": "John Doe",
"email": "john.doe@example.com",
"credit_card": "4111 1111 1111 1111",
"ssn": "123-45-6789",
"address": "123 Main St"
}'
```
The request body sent to the LLM service is as follows:
```json
{
"messages": [
{
"role": "system",
"content": "Given a JSON request body, identify and mask any sensitive information such as credit card numbers, social security numbers, and personal identification numbers (e.g., passport or driver's license numbers). Replace detected sensitive values with a masked format (e.g., '*** **** **** 1234') for credit card numbers). Ensure the JSON structure remains unchanged."
},
{
"role": "user",
"content": "{\n\"name\":\"John Doe\",\n\"email\":\"john.doe@example.com\",\n\"credit_card\":\"4111 1111 1111 1111\",\n\"ssn\":\"123-45-6789\",\n\"address\":\"123 Main St\"\n}"
}
]
}
```
The LLM processes the input and returns a modified request body, which replaces the detected sensitive values with a masked format and is then used for the upstream request:
```json
{
"name": "John Doe",
"email": "john.doe@example.com",
"credit_card": "**** **** **** 1111",
"ssn": "***-**-6789",
"address": "123 Main St"
}
```
### Send request to an OpenAI compatible LLM
Create a route with the `ai-request-rewrite` plugin, setting `provider` to `openai-compatible` and the model endpoint in `override.endpoint`, like so:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes/1" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"uri": "/anything",
"plugins": {
"ai-request-rewrite": {
"prompt": "Given a JSON request body, identify and mask any sensitive information such as credit card numbers, social security numbers, and personal identification numbers (e.g., passport or driver'\''s license numbers). Replace detected sensitive values with a masked format (e.g., '*** **** **** 1234') for credit card numbers). Ensure the JSON structure remains unchanged.",
"provider": "openai-compatible",
"auth": {
"header": {
"Authorization": "Bearer <some-token>"
}
},
"options": {
"model": "qwen-plus",
"max_tokens": 1024,
"temperature": 1
},
"override": {
"endpoint": "https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions"
}
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```

View File

@@ -0,0 +1,136 @@
---
title: api-breaker
keywords:
- Apache APISIX
- API Gateway
- API Breaker
description: This document describes the information about the Apache APISIX api-breaker Plugin, you can use it to protect Upstream services.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `api-breaker` Plugin implements circuit breaker functionality to protect Upstream services.
:::note
Whenever the Upstream service responds with a status code from the configured `unhealthy.http_statuses` list for the configured `unhealthy.failures` number of times, the Upstream service will be considered unhealthy.
Requests are then retried against the Upstream after 2, 4, 8, 16, ... seconds (doubling each time), up to `max_breaker_sec`. For example, with the default `max_breaker_sec` of 300, the break duration grows up to 256 seconds and is then capped at 300 seconds.
In an unhealthy state, if the Upstream service responds with a status code from the configured list `healthy.http_statuses` for `healthy.successes` times, the service is considered healthy again.
:::
## Attributes
| Name | Type | Required | Default | Valid values | Description |
|-------------------------|----------------|----------|---------|-----------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| break_response_code | integer | True | | [200, ..., 599] | HTTP error code to return when Upstream is unhealthy. |
| break_response_body | string | False | | | Body of the response message to return when Upstream is unhealthy. |
| break_response_headers | array[object] | False | | [{"key":"header_name","value":"can contain Nginx $var"}] | Headers of the response message to return when Upstream is unhealthy. Can only be configured when the `break_response_body` attribute is configured. The values can contain APISIX variables. For example, we can use `{"key":"X-Client-Addr","value":"$remote_addr:$remote_port"}`. |
| max_breaker_sec | integer | False | 300 | >=3 | Maximum time in seconds for circuit breaking. |
| unhealthy.http_statuses | array[integer] | False | [500] | [500, ..., 599] | Status codes of Upstream to be considered unhealthy. |
| unhealthy.failures | integer | False | 3 | >=1 | Number of failures within a certain period of time for the Upstream service to be considered unhealthy. |
| healthy.http_statuses | array[integer] | False | [200] | [200, ..., 499] | Status codes of Upstream to be considered healthy. |
| healthy.successes | integer | False | 3 | >=1 | Number of consecutive healthy requests for the Upstream service to be considered healthy. |
## Enable Plugin
The example below shows how you can configure the Plugin on a specific Route:
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes/1" \
-H "X-API-KEY: $admin_key" -X PUT -d '
{
"plugins": {
"api-breaker": {
"break_response_code": 502,
"unhealthy": {
"http_statuses": [500, 503],
"failures": 3
},
"healthy": {
"http_statuses": [200],
"successes": 1
}
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
},
"uri": "/hello"
}'
```
In this configuration, a response code of `500` or `503` three times within a certain period of time triggers the unhealthy status of the Upstream service. A response code of `200` restores its healthy status.
## Example usage
Once you have configured the Plugin as shown above, you can test it out by sending a request.
```shell
curl -i -X POST "http://127.0.0.1:9080/hello"
```
If the Upstream service responds with an unhealthy response code, you will receive the configured response code (`break_response_code`).
```shell
HTTP/1.1 502 Bad Gateway
...
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>openresty</center>
</body>
</html>
```
## Delete Plugin
To remove the `api-breaker` Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 \
-H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/hello",
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
}
}'
```

View File

@@ -0,0 +1,180 @@
---
title: attach-consumer-label
keywords:
- Apache APISIX
- API Gateway
- API Consumer
description: This article describes the Apache APISIX attach-consumer-label plugin, which you can use to pass custom consumer labels to upstream services.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `attach-consumer-label` plugin attaches custom consumer-related labels, in addition to `X-Consumer-Username` and `X-Credential-Indentifier`, to authenticated requests, so that upstream services can differentiate between consumers and implement additional logic.
## Attributes
| Name | Type | Required | Default | Valid values | Description |
|----------|--------|----------|---------|--------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| headers | object | True | | | Key-value pairs of consumer labels to be attached to request headers, where key is the request header name, such as `X-Consumer-Role`, and the value is a reference to the custom label key, such as `$role`. Note that the value should always start with a dollar sign (`$`). If a referenced consumer value is not configured on the consumer, the corresponding header will not be attached to the request. |
## Enable Plugin
The following example demonstrates how you can attach custom labels to request headers before authenticated requests are forwarded to upstream services. If the request is rejected, you should not see any consumer labels attached to request headers. If a certain label value is not configured on the consumer but referenced in the `attach-consumer-label` plugin, the corresponding header will also not be attached.
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
Create a consumer `john` with custom labels:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers" -X PUT \
-H "X-API-KEY: ${ADMIN_API_KEY}" \
-d '{
"username": "john",
# highlight-start
"labels": {
// Annotate 1
"department": "devops",
// Annotate 2
"company": "api7"
}
# highlight-end
}'
```
❶ Label the `department` information for the consumer.
❷ Label the `company` information for the consumer.
Configure the `key-auth` credential for the consumer `john`:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers/john/credentials" -X PUT \
-H "X-API-KEY: ${ADMIN_API_KEY}" \
-d '{
"id": "cred-john-key-auth",
"plugins": {
"key-auth": {
"key": "john-key"
}
}
}'
```
Create a route enabling the `key-auth` and `attach-consumer-label` plugins:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${ADMIN_API_KEY}" \
-d '{
"id": "attach-consumer-label-route",
"uri": "/get",
"plugins": {
"key-auth": {},
# highlight-start
"attach-consumer-label": {
"headers": {
// Annotate 1
"X-Consumer-Department": "$department",
// Annotate 2
"X-Consumer-Company": "$company",
// Annotate 3
"X-Consumer-Role": "$role"
}
}
# highlight-end
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
❶ Attach the `department` consumer label value in the `X-Consumer-Department` request header.
❷ Attach the `company` consumer label value in the `X-Consumer-Company` request header.
❸ Attach the `role` consumer label value in the `X-Consumer-Role` request header. As the `role` label is not configured on the consumer, it is expected that the header will not appear in the request forwarded to the upstream service.
:::tip
The consumer label references must be prefixed by a dollar sign (`$`).
:::
To verify, send a request to the route with the valid credential:
```shell
curl -i "http://127.0.0.1:9080/get" -H 'apikey: john-key'
```
You should see an `HTTP/1.1 200 OK` response similar to the following:
```text
{
"args": {},
"headers": {
"Accept": "*/*",
"Apikey": "john-key",
"Host": "127.0.0.1",
# highlight-start
"X-Consumer-Username": "john",
"X-Credential-Indentifier": "cred-john-key-auth",
"X-Consumer-Company": "api7",
"X-Consumer-Department": "devops",
# highlight-end
"User-Agent": "curl/8.6.0",
"X-Amzn-Trace-Id": "Root=1-66e5107c-5bb3e24f2de5baf733aec1cc",
"X-Forwarded-Host": "127.0.0.1"
},
"origin": "192.168.65.1, 205.198.122.37",
"url": "http://127.0.0.1/get"
}
```
## Delete plugin
To remove the Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes/attach-consumer-label-route" -X PUT \
-H "X-API-KEY: ${ADMIN_API_KEY}" \
-d '{
"uri": "/get",
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```

View File

@@ -0,0 +1,263 @@
---
title: authz-casbin
keywords:
- Apache APISIX
- API Gateway
- Plugin
- Authz Casbin
- authz-casbin
description: This document contains information about the Apache APISIX authz-casbin Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `authz-casbin` Plugin is an authorization Plugin based on [Lua Casbin](https://github.com/casbin/lua-casbin/). This Plugin supports powerful authorization scenarios based on various [access control models](https://casbin.org/docs/en/supported-models).
## Attributes
| Name | Type | Required | Description |
|-------------|--------|----------|----------------------------------------------------------------------------------------|
| model_path | string | True | Path of the Casbin model configuration file. |
| policy_path | string | True | Path of the Casbin policy file. |
| model | string | True | Casbin model configuration in text format. |
| policy | string | True | Casbin policy in text format. |
| username | string | True | Header in the request that will be used in the request to pass the username (subject). |
:::note
You must either specify the `model_path`, `policy_path`, and the `username` attributes or specify the `model`, `policy` and the `username` attributes in the Plugin configuration for it to be valid.
If you wish to use a global Casbin configuration, you can first specify `model` and `policy` attributes in the Plugin metadata and only the `username` attribute in the Plugin configuration. All Routes will use the Plugin configuration this way.
:::
## Metadata
| Name | Type | Required | Description |
|--------|--------|----------|--------------------------------------------|
| model | string | True | Casbin model configuration in text format. |
| policy | string | True | Casbin policy in text format. |
## Enable Plugin
You can enable the Plugin on a Route by either using the model/policy file paths or using the model/policy text in Plugin configuration/metadata.
### By using model/policy file paths
The example below shows setting up Casbin authentication from your model/policy configuration file:
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"plugins": {
"authz-casbin": {
"model_path": "/path/to/model.conf",
"policy_path": "/path/to/policy.csv",
"username": "user"
}
},
"upstream": {
"nodes": {
"127.0.0.1:1980": 1
},
"type": "roundrobin"
},
"uri": "/*"
}'
```
### By using model/policy text in Plugin configuration
The example below shows setting up Casbin authentication from your model/policy text in your Plugin configuration:
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"plugins": {
"authz-casbin": {
"model": "[request_definition]
r = sub, obj, act
[policy_definition]
p = sub, obj, act
[role_definition]
g = _, _
[policy_effect]
e = some(where (p.eft == allow))
[matchers]
m = (g(r.sub, p.sub) || keyMatch(r.sub, p.sub)) && keyMatch(r.obj, p.obj) && keyMatch(r.act, p.act)",
"policy": "p, *, /, GET
p, admin, *, *
g, alice, admin",
"username": "user"
}
},
"upstream": {
"nodes": {
"127.0.0.1:1980": 1
},
"type": "roundrobin"
},
"uri": "/*"
}'
```
### By using model/policy text in Plugin metadata
First, you need to send a `PUT` request to the Admin API to add the `model` and `policy` text to the Plugin metadata.
All Routes configured this way will use a single Casbin enforcer with the configured Plugin metadata. You can also update the model/policy in this way and the Plugin will automatically update to the new configuration.
```shell
curl http://127.0.0.1:9180/apisix/admin/plugin_metadata/authz-casbin -H "X-API-KEY: $admin_key" -i -X PUT -d '
{
"model": "[request_definition]
r = sub, obj, act
[policy_definition]
p = sub, obj, act
[role_definition]
g = _, _
[policy_effect]
e = some(where (p.eft == allow))
[matchers]
m = (g(r.sub, p.sub) || keyMatch(r.sub, p.sub)) && keyMatch(r.obj, p.obj) && keyMatch(r.act, p.act)",
"policy": "p, *, /, GET
p, admin, *, *
g, alice, admin"
}'
```
Once you have updated the Plugin metadata, you can add the Plugin to a specific Route:
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"plugins": {
"authz-casbin": {
"username": "user"
}
},
"upstream": {
"nodes": {
"127.0.0.1:1980": 1
},
"type": "roundrobin"
},
"uri": "/*"
}'
```
:::note
The Plugin Route configuration has a higher precedence than the Plugin metadata configuration. If the model/policy configuration is present in the Plugin Route configuration, it is used instead of the metadata configuration.
:::
## Example usage
We define the example model as:
```conf
[request_definition]
r = sub, obj, act
[policy_definition]
p = sub, obj, act
[role_definition]
g = _, _
[policy_effect]
e = some(where (p.eft == allow))
[matchers]
m = (g(r.sub, p.sub) || keyMatch(r.sub, p.sub)) && keyMatch(r.obj, p.obj) && keyMatch(r.act, p.act)
```
And the example policy as:
```conf
p, *, /, GET
p, admin, *, *
g, alice, admin
```
See [examples](https://github.com/casbin/lua-casbin/tree/master/examples) for more policy and model configurations.
The above configuration will let anyone access the homepage (`/`) using a `GET` request while only users with admin permissions can access other pages and use other request methods.
So if we make a `GET` request to the homepage:
```shell
curl -i http://127.0.0.1:9080/ -X GET
```
This request will succeed. But if an unauthorized user tries to access any other page, they will get a `403 Forbidden` error:
```shell
curl -i http://127.0.0.1:9080/res -H 'user: bob' -X GET
HTTP/1.1 403 Forbidden
```
And only users with admin privileges can access the endpoints:
```shell
curl -i http://127.0.0.1:9080/res -H 'user: alice' -X GET
```
## Delete Plugin
To remove the `authz-casbin` Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"methods": ["GET"],
"uri": "/*",
"plugins": {},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
}
}'
```

View File

@@ -0,0 +1,118 @@
---
title: authz-casdoor
keywords:
- Apache APISIX
- API Gateway
- Plugin
- Authz Casdoor
- authz-casdoor
description: This document contains information about the Apache APISIX authz-casdoor Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `authz-casdoor` Plugin can be used to add centralized authentication with [Casdoor](https://casdoor.org/).
## Attributes
| Name | Type | Required | Description |
|---------------|--------|----------|----------------------------------------------|
| endpoint_addr | string | True | URL of Casdoor. |
| client_id | string | True | Client ID in Casdoor. |
| client_secret | string | True | Client secret in Casdoor. |
| callback_url | string | True | Callback URL used to receive state and code. |
NOTE: `encrypt_fields = {"client_secret"}` is also defined in the schema, which means that the field will be stored encrypted in etcd. See [encrypted storage fields](../plugin-develop.md#encrypted-storage-fields).
:::info IMPORTANT
`endpoint_addr` and `callback_url` should not end with '/'.
:::
:::info IMPORTANT
The `callback_url` must belong to the URI of your Route. See the code snippet below for an example configuration.
:::
## Enable Plugin
You can enable the Plugin on a specific Route as shown below:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes/1" -H "X-API-KEY: edd1c9f034335f136f87ad84b625c8f1" -X PUT -d '
{
"methods": ["GET"],
"uri": "/anything/*",
"plugins": {
"authz-casdoor": {
"endpoint_addr":"http://localhost:8000",
"callback_url":"http://localhost:9080/anything/callback",
"client_id":"7ceb9b7fda4a9061ec1c",
"client_secret":"3416238e1edf915eac08b8fe345b2b95cdba7e04"
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
## Example usage
Once you have enabled the Plugin, a new user visiting this Route would first be processed by the `authz-casdoor` Plugin. They would be redirected to the login page of Casdoor.
After successfully logging in, Casdoor will redirect the user to the `callback_url` with the GET parameters `code` and `state`. The Plugin then requests an access token to confirm that the user is indeed logged in. This process is only done once; subsequent requests are left uninterrupted.
Once this is done, the user is redirected to the original URL they wanted to visit.
## Delete Plugin
To remove the `authz-casdoor` Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"methods": ["GET"],
"uri": "/anything/*",
"plugins": {},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```

View File

@@ -0,0 +1,241 @@
---
title: authz-keycloak
keywords:
- Apache APISIX
- API Gateway
- Plugin
- Authz Keycloak
- authz-keycloak
description: This document contains information about the Apache APISIX authz-keycloak Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `authz-keycloak` Plugin can be used to add authentication with [Keycloak Identity Server](https://www.keycloak.org/).
:::tip
Although this Plugin was developed to work with Keycloak, it should work with any OAuth/OIDC and UMA compliant identity providers as well.
:::
Refer to [Authorization Services Guide](https://www.keycloak.org/docs/latest/authorization_services/) for more information on Keycloak.
## Attributes
| Name | Type | Required | Default | Valid values | Description |
|----------------------------------------------|---------------|----------|-----------------------------------------------|--------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| discovery | string | False | | https://host.domain/realms/foo/.well-known/uma2-configuration | URL to [discovery document](https://www.keycloak.org/docs/latest/authorization_services/index.html) of Keycloak Authorization Services. |
| token_endpoint | string | False | | https://host.domain/realms/foo/protocol/openid-connect/token | An OAuth2-compliant token endpoint that supports the `urn:ietf:params:oauth:grant-type:uma-ticket` grant type. If provided, overrides the value from discovery. |
| resource_registration_endpoint | string | False | | https://host.domain/realms/foo/authz/protection/resource_set | A UMA-compliant resource registration endpoint. If provided, overrides the value from discovery. |
| client_id | string | True | | | The identifier of the resource server to which the client is seeking access. |
| client_secret | string | False | | | The client secret, if required. You can use APISIX secret to store and reference this value. APISIX currently supports storing secrets in two ways. [Environment Variables and HashiCorp Vault](../terminology/secret.md) |
| grant_type | string | False | "urn:ietf:params:oauth:grant-type:uma-ticket" | ["urn:ietf:params:oauth:grant-type:uma-ticket"] | |
| policy_enforcement_mode | string | False | "ENFORCING" | ["ENFORCING", "PERMISSIVE"] | |
| permissions | array[string] | False | | | An array of strings, each representing a set of one or more resources and scopes the client is seeking access. |
| lazy_load_paths | boolean | False | false | | When set to true, dynamically resolves the request URI to resource(s) using the resource registration endpoint instead of the static permission. |
| http_method_as_scope | boolean | False | false | | When set to true, maps the HTTP request type to scope of the same name and adds to all requested permissions. |
| timeout | integer | False | 3000 | [1000, ...] | Timeout in ms for the HTTP connection with the Identity Server. |
| access_token_expires_in | integer | False | 300 | [1, ...] | Expiration time(s) of the access token. |
| access_token_expires_leeway | integer | False | 0 | [0, ...] | Expiration leeway(s) for access_token renewal. When set, the token will be renewed access_token_expires_leeway seconds before expiration. This avoids errors in cases where the access_token just expires when reaching the OAuth Resource Server. |
| refresh_token_expires_in | integer | False | 3600 | [1, ...] | The expiration time(s) of the refresh token. |
| refresh_token_expires_leeway | integer | False | 0 | [0, ...] | Expiration leeway(s) for refresh_token renewal. When set, the token will be renewed refresh_token_expires_leeway seconds before expiration. This avoids errors in cases where the refresh_token just expires when reaching the OAuth Resource Server. |
| ssl_verify | boolean | False | true | | When set to true, verifies if TLS certificate matches hostname. |
| cache_ttl_seconds | integer | False | 86400 (equivalent to 24h) | positive integer >= 1 | Maximum time in seconds up to which the Plugin caches discovery documents and tokens used by the Plugin to authenticate to Keycloak. |
| keepalive | boolean | False | true | | When set to true, enables HTTP keep-alive to keep connections open after use. Set to `true` if you are expecting a lot of requests to Keycloak. |
| keepalive_timeout | integer | False | 60000 | positive integer >= 1000 | Idle time after which the established HTTP connections will be closed. |
| keepalive_pool | integer | False | 5 | positive integer >= 1 | Maximum number of connections in the connection pool. |
| access_denied_redirect_uri | string | False | | [1, 2048] | URI to redirect the user to instead of returning an error message like `"error_description":"not_authorized"`. |
| password_grant_token_generation_incoming_uri | string | False | | /api/token | Set this to generate a token using the password grant type. The Plugin will compare the incoming request URI to this value. |
NOTE: `encrypt_fields = {"client_secret"}` is also defined in the schema, which means that the field will be stored encrypted in etcd. See [encrypted storage fields](../plugin-develop.md#encrypted-storage-fields).
### Discovery and endpoints
It is recommended to use the `discovery` attribute as the `authz-keycloak` Plugin can discover the Keycloak API endpoints from it.
If set, the `token_endpoint` and `resource_registration_endpoint` will override the values obtained from the discovery document.
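For example, a minimal sketch of a plugin configuration that relies on discovery only (the realm URL and client ID are placeholders):
```json
{
  "authz-keycloak": {
    "client_id": "Client ID",
    "discovery": "https://host.domain/realms/foo/.well-known/uma2-configuration"
  }
}
```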
### Client ID and secret
The Plugin needs the `client_id` attribute for identification and to specify the context in which to evaluate permissions when interacting with Keycloak.
If the `lazy_load_paths` attribute is set to true, then the Plugin additionally needs to obtain an access token for itself from Keycloak. In such cases, if the client access to Keycloak is confidential, you need to configure the `client_secret` attribute.
### Policy enforcement mode
The `policy_enforcement_mode` attribute specifies how policies are enforced when processing authorization requests sent to the server.
#### `ENFORCING` mode
Requests are denied by default even when there is no policy associated with a resource.
The `policy_enforcement_mode` is set to `ENFORCING` by default.
#### `PERMISSIVE` mode
Requests are allowed when there is no policy associated with a given resource.
### Permissions
When handling incoming requests, the Plugin can determine the permissions to check with Keycloak statically or dynamically from the properties of the request.
If the `lazy_load_paths` attribute is set to `false`, the permissions are taken from the `permissions` attribute. Each entry in `permissions` needs to be formatted as expected by the token endpoint's `permission` parameter. See [Obtaining Permissions](https://www.keycloak.org/docs/latest/authorization_services/index.html#_service_obtaining_permissions).
:::note
A valid permission can be a single resource or a resource paired with one or more scopes.
:::
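For instance, a static permission entry can name a resource alone, or a resource together with a scope after a `#`, as in the sketch below; the resource and scope names are placeholders:
```json
{
  "authz-keycloak": {
    "client_id": "Client ID",
    "token_endpoint": "https://host.domain/realms/foo/protocol/openid-connect/token",
    "permissions": [
      "resource name#scope name",
      "another resource"
    ]
  }
}
```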
If the `lazy_load_paths` attribute is set to `true`, the request URI is resolved to one or more resources configured in Keycloak using the resource registration endpoint. The resolved resources are used as the permissions to check.
:::note
This requires the Plugin to obtain a separate access token for itself from the token endpoint. So, make sure to set the `Service Accounts Enabled` option in the client settings in Keycloak.
Also make sure that the issued access token contains the `resource_access` claim with the `uma_protection` role to ensure that the Plugin is able to query resources through the Protection API.
:::
### Automatically mapping HTTP method to scope
The `http_method_as_scope` is often used together with `lazy_load_paths` but can also be used with a static permission list.
If the `http_method_as_scope` attribute is set to `true`, the Plugin maps the request's HTTP method to the scope with the same name. The scope is then added to every permission to check.
If the `lazy_load_paths` attribute is set to false, the Plugin adds the mapped scope to any of the static permissions configured in the `permissions` attribute—even if they contain on or more scopes already.
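As a sketch (endpoint and client values are placeholders), combining dynamic path resolution with method-to-scope mapping might look like:
```json
{
  "authz-keycloak": {
    "client_id": "Client ID",
    "client_secret": "Client Secret",
    "discovery": "https://host.domain/realms/foo/.well-known/uma2-configuration",
    "lazy_load_paths": true,
    "http_method_as_scope": true
  }
}
```
With this configuration, a `GET` request would be checked against the resources resolved for the request URI, with the `GET` scope added to each permission.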
### Generating a token using `password` grant
To generate a token using `password` grant, you can set the value of the `password_grant_token_generation_incoming_uri` attribute.
If the incoming URI matches the configured attribute and the request method is POST, a token is generated using the `token_endpoint`.
You also need to add `application/x-www-form-urlencoded` as `Content-Type` header and `username` and `password` as parameters.
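A sketch of the corresponding plugin configuration is shown below; the endpoint, client values, and URI are placeholders:
```json
{
  "authz-keycloak": {
    "client_id": "Client ID",
    "client_secret": "Client Secret",
    "token_endpoint": "https://host.domain/realms/foo/protocol/openid-connect/token",
    "password_grant_token_generation_incoming_uri": "/api/token"
  }
}
```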
The example below shows a request if the `password_grant_token_generation_incoming_uri` is `/api/token`:
```shell
curl --location --request POST 'http://127.0.0.1:9080/api/token' \
--header 'Accept: application/json, text/plain, */*' \
--header 'Content-Type: application/x-www-form-urlencoded' \
--data-urlencode 'username=<User_Name>' \
--data-urlencode 'password=<Password>'
```
## Enable Plugin
The example below shows how you can enable the `authz-keycloak` Plugin on a specific Route. `${realm}` represents the realm name in Keycloak.
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/5 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/get",
"plugins": {
"authz-keycloak": {
"token_endpoint": "http://127.0.0.1:8090/realms/${realm}/protocol/openid-connect/token",
"permissions": ["resource name#scope name"],
"client_id": "Client ID"
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:8080": 1
}
}
}'
```
## Example usage
Once you have enabled the Plugin on a Route you can use it.
First, you have to get the JWT token from Keycloak:
```shell
curl "http://<YOUR_KEYCLOAK_HOST>/realms/<YOUR_REALM>/protocol/openid-connect/token" \
-d "client_id=<YOUR_CLIENT_ID>" \
-d "client_secret=<YOUR_CLIENT_SECRET>" \
-d "username=<YOUR_USERNAME>" \
-d "password=<YOUR_PASSWORD>" \
-d "grant_type=password"
```
You should see a response similar to the following:
```text
{"access_token":"eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJoT3ludlBPY2d6Y3VWWnYtTU42bXZKMUczb0dOX2d6MFo3WFl6S2FSa1NBIn0.eyJleHAiOjE3MDMyOTAyNjAsImlhdCI6MTcwMzI4OTk2MCwianRpIjoiMjJhOGFmMzItNDM5Mi00Yzg3LThkM2UtZDkyNDVmZmNiYTNmIiwiaXNzIjoiaHR0cDovLzE5Mi4xNjguMS44Mzo4MDgwL3JlYWxtcy9xdWlja3N0YXJ0LXJlYWxtIiwiYXVkIjoiYWNjb3VudCIsInN1YiI6IjAyZWZlY2VlLTBmYTgtNDg1OS1iYmIwLTgyMGZmZDdjMWRmYSIsInR5cCI6IkJlYXJlciIsImF6cCI6ImFwaXNpeC1xdWlja3N0YXJ0LWNsaWVudCIsInNlc3Npb25fc3RhdGUiOiI1YzIzZjVkZC1hN2ZhLTRlMmItOWQxNC02MmI1YzYyNmU1NDYiLCJhY3IiOiIxIiwicmVhbG1fYWNjZXNzIjp7InJvbGVzIjpbImRlZmF1bHQtcm9sZXMtcXVpY2tzdGFydC1yZWFsbSIsIm9mZmxpbmVfYWNjZXNzIiwidW1hX2F1dGhvcml6YXRpb24iXX0sInJlc291cmNlX2FjY2VzcyI6eyJhY2NvdW50Ijp7InJvbGVzIjpbIm1hbmFnZS1hY2NvdW50IiwibWFuYWdlLWFjY291bnQtbGlua3MiLCJ2aWV3LXByb2ZpbGUiXX19LCJzY29wZSI6ImVtYWlsIHByb2ZpbGUiLCJzaWQiOiI1YzIzZjVkZC1hN2ZhLTRlMmItOWQxNC02MmI1YzYyNmU1NDYiLCJlbWFpbF92ZXJpZmllZCI6ZmFsc2UsInByZWZlcnJlZF91c2VybmFtZSI6InF1aWNrc3RhcnQtdXNlciJ9.WNZQiLRleqCxw-JS-MHkqXnX_BPA9i6fyVHqF8l-L-2QxcqTAwbIp7AYKX-z90CG6EdRXOizAEkQytB32eVWXaRkLeTYCI7wIrT8XSVTJle4F88ohuBOjDfRR61yFh5k8FXXdAyRzcR7tIeE2YUFkRqw1gCT_VEsUuXPqm2wTKOmZ8fRBf4T-rP4-ZJwPkHAWc_nG21TmLOBCSulzYqoC6Lc-OvX5AHde9cfRuXx-r2HhSYs4cXtvX-ijA715MY634CQdedheoGca5yzPsJWrAlBbCruN2rdb4u5bDxKU62pJoJpmAsR7d5qYpYVA6AsANDxHLk2-W5F7I_IxqR0YQ","expires_in":300,"refresh_expires_in":1800,"refresh_token":"eyJhbGciOiJIUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJjN2IwYmY4NC1kYjk0LTQ5YzctYWIyZC01NmU3ZDc1MmRkNDkifQ.eyJleHAiOjE3MDMyOTE3NjAsImlhdCI6MTcwMzI4OTk2MCwianRpIjoiYzcyZjAzMzctYmZhNS00MWEzLTlhYjEtZmJlNGY0NmZjMDgxIiwiaXNzIjoiaHR0cDovLzE5Mi4xNjguMS44Mzo4MDgwL3JlYWxtcy9xdWlja3N0YXJ0LXJlYWxtIiwiYXVkIjoiaHR0cDovLzE5Mi4xNjguMS44Mzo4MDgwL3JlYWxtcy9xdWlja3N0YXJ0LXJlYWxtIiwic3ViIjoiMDJlZmVjZWUtMGZhOC00ODU5LWJiYjAtODIwZmZkN2MxZGZhIiwidHlwIjoiUmVmcmVzaCIsImF6cCI6ImFwaXNpeC1xdWlja3N0YXJ0LWNsaWVudCIsInNlc3Npb25fc3RhdGUiOiI1YzIzZjVkZC1hN2ZhLTRlMmItOWQxNC02MmI1YzYyNmU1NDYiLCJzY29wZSI6ImVtYWlsIHByb2ZpbGUiLCJzaWQiOiI1YzIzZjVkZC1hN2ZhLTRlMmItOWQxNC02MmI1YzYyNmU1NDYifQ.7AH7ppbVOlkYc9CoJ7kLSlDUkmFuNga28Amugn2t724","token_type":"Bearer","not-before-policy":0,"session_state":"5c23f5dd-a7fa-4e2b-9d14-62b5c626e546","scope":"email profile"}
```
Now you can make requests with the access token:
```shell
curl http://127.0.0.1:9080/get -H 'Authorization: Bearer ${ACCESS_TOKEN}'
```
To learn more about how you can integrate authorization policies into your API workflows you can checkout the unit test [authz-keycloak.t](https://github.com/apache/apisix/blob/master/t/plugin/authz-keycloak.t).
Run the following Docker image and go to `http://localhost:8090` to view the associated policies for the unit tests.
```bash
docker run -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=123456 -p 8090:8080 sshniro/keycloak-apisix
```
The image below shows how the policies are configured in the Keycloak server:
![Keycloak policy design](https://raw.githubusercontent.com/apache/apisix/master/docs/assets/images/plugin/authz-keycloak.png)
## Delete Plugin
To remove the `authz-keycloak` Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/5 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/get",
"plugins": {
},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:8080": 1
}
}
}'
```
## Plugin roadmap
- Currently, the `authz-keycloak` Plugin requires you to define the resource name and the required scopes to enforce policies for a Route. Keycloak's official adapters (Java, JavaScript) provide path matching by querying Keycloak paths dynamically and lazily loading the paths to identify resources. Upcoming releases of the Plugin will support this functionality.
- Add support for reading scopes and configurations from the Keycloak JSON file.

View File

@@ -0,0 +1,217 @@
---
title: aws-lambda
keywords:
- Apache APISIX
- Plugin
- AWS Lambda
- aws-lambda
description: This document contains information about the Apache APISIX aws-lambda Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `aws-lambda` Plugin is used for integrating APISIX with [AWS Lambda](https://aws.amazon.com/lambda/) and [Amazon API Gateway](https://aws.amazon.com/api-gateway/) as a dynamic upstream to proxy all requests for a particular URI to the AWS Cloud.
When enabled, the Plugin terminates the ongoing request to the configured URI and initiates a new request to the AWS Lambda Gateway URI on behalf of the client with configured authorization details, request headers, body and parameters (all three passed from the original request). It returns the response with headers, status code and the body to the client that initiated the request with APISIX.
This Plugin supports authorization via AWS API key and AWS IAM secrets. The Plugin implements [AWS Signature Version 4 signing](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_aws-signing.html) for IAM secrets.
## Attributes
| Name | Type | Required | Default | Valid values | Description |
|----------------------|---------|----------|---------|--------------|--------------------------------------------------------------------------------------------------------------------------------------------|
| function_uri | string | True | | | AWS API Gateway endpoint which triggers the lambda serverless function. |
| authorization | object | False | | | Authorization credentials to access the cloud function. |
| authorization.apikey | string | False | | | Generated API Key to authorize requests to the AWS Gateway endpoint. |
| authorization.iam | object | False | | | Used for AWS IAM role based authorization performed via AWS v4 request signing. See [IAM authorization schema](#iam-authorization-schema). |
| authorization.iam.accesskey | string | True | | | Generated access key ID from AWS IAM console. |
| authorization.iam.secretkey | string | True | | | Generated access key secret from AWS IAM console. |
| authorization.iam.aws_region | string | False | "us-east-1" | | AWS region where the request is being sent. |
| authorization.iam.service | string | False | "execute-api" | | The service that is receiving the request. For Amazon API Gateway APIs, it should be set to `execute-api`. For Lambda functions, it should be set to `lambda`. |
| timeout | integer | False | 3000 | [100,...] | Proxy request timeout in milliseconds. |
| ssl_verify | boolean | False | true | true/false | When set to `true` performs SSL verification. |
| keepalive | boolean | False | true | true/false | When set to `true` keeps the connection alive for reuse. |
| keepalive_pool | integer | False | 5 | [1,...] | Maximum number of requests that can be sent on this connection before closing it. |
| keepalive_timeout    | integer | False    | 60000   | [1000,...]   | Time in ms for the connection to remain idle without closing. |
## Enable Plugin
The example below shows how you can configure the Plugin on a specific Route:
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"plugins": {
"aws-lambda": {
"function_uri": "https://x9w6z07gb9.execute-api.us-east-1.amazonaws.com/default/test-apisix",
"authorization": {
"apikey": "<Generated API Key from aws console>"
},
"ssl_verify":false
}
},
"uri": "/aws"
}'
```
Now, any request (HTTP/1.1, HTTPS, HTTP/2) to the endpoint `/aws` will invoke the configured AWS Lambda function URI, and the response will be proxied back to the client.
In the example below, the AWS Lambda function takes in a name from the query string and returns the message "Hello $name":
```shell
curl -i -XGET localhost:9080/aws\?name=APISIX
```
```shell
HTTP/1.1 200 OK
Content-Type: application/json
Connection: keep-alive
Date: Sat, 27 Nov 2021 13:08:27 GMT
x-amz-apigw-id: JdwXuEVxIAMFtKw=
x-amzn-RequestId: 471289ab-d3b7-4819-9e1a-cb59cac611e0
Content-Length: 16
X-Amzn-Trace-Id: Root=1-61a22dca-600c552d1c05fec747fd6db0;Sampled=0
Server: APISIX/2.10.2
"Hello, APISIX!"
```
Another example of a request where the client communicates with APISIX via HTTP/2 is shown below. Before proceeding, make sure you have configured `enable_http2: true` in your configuration file `config.yaml` for port `9081` and reloaded APISIX. See [`config.yaml.example`](https://github.com/apache/apisix/blob/master/conf/config.yaml.example) for the example configuration.
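A minimal fragment for this, sketched from the linked `config.yaml.example` (verify the exact layout against your APISIX version), looks like:
```yaml title="conf/config.yaml"
apisix:
  node_listen:               # listeners for client traffic
    - 9080                   # HTTP/1.1 listener
    - port: 9081
      enable_http2: true     # enable HTTP/2 on port 9081
```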
```shell
curl -i -XGET --http2 --http2-prior-knowledge localhost:9081/aws\?name=APISIX
```
```shell
HTTP/2 200
content-type: application/json
content-length: 16
x-amz-apigw-id: JdwulHHrIAMFoFg=
date: Sat, 27 Nov 2021 13:10:53 GMT
x-amzn-trace-id: Root=1-61a22e5d-342eb64077dc9877644860dd;Sampled=0
x-amzn-requestid: a2c2b799-ecc6-44ec-b586-38c0e3b11fe4
server: APISIX/2.10.2
"Hello, APISIX!"
```
Similarly, the function can be triggered via AWS API Gateway by using AWS IAM permissions for authorization. The Plugin includes authentication signatures in HTTP calls via AWS v4 request signing. The example below shows this method:
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"plugins": {
"aws-lambda": {
"function_uri": "https://ajycz5e0v9.execute-api.us-east-1.amazonaws.com/default/test-apisix",
"authorization": {
"iam": {
"accesskey": "<access key>",
"secretkey": "<access key secret>"
}
},
"ssl_verify": false
}
},
"uri": "/aws"
}'
```
:::note
This approach assumes that you already have an IAM user with programmatic access enabled and the required permissions (`AmazonAPIGatewayInvokeFullAccess`) to access the endpoint.
:::
### Configuring path forwarding
The `aws-lambda` Plugin also supports URL path forwarding while proxying requests to the AWS upstream. Extensions to the base request path get appended to the `function_uri` specified in the Plugin configuration.
:::info IMPORTANT
The `uri` configured on a Route must end with `*` for this feature to work properly. APISIX Routes are matched strictly and the `*` implies that any subpath to this URI would be matched to the same Route.
:::
The example below configures this feature:
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"plugins": {
"aws-lambda": {
"function_uri": "https://x9w6z07gb9.execute-api.us-east-1.amazonaws.com",
"authorization": {
"apikey": "<Generate API key>"
},
"ssl_verify":false
}
},
"uri": "/aws/*"
}'
```
Now, any request to the path `/aws/default/test-apisix` will invoke the AWS Lambda function, and the added path will be forwarded:
```shell
curl -i -XGET http://127.0.0.1:9080/aws/default/test-apisix\?name\=APISIX
```
```shell
HTTP/1.1 200 OK
Content-Type: application/json
Connection: keep-alive
Date: Wed, 01 Dec 2021 14:23:27 GMT
X-Amzn-Trace-Id: Root=1-61a7855f-0addc03e0cf54ddc683de505;Sampled=0
x-amzn-RequestId: f5f4e197-9cdd-49f9-9b41-48f0d269885b
Content-Length: 16
x-amz-apigw-id: JrHG8GC4IAMFaGA=
Server: APISIX/2.11.0
"Hello, APISIX!"
```
## Delete Plugin
To remove the `aws-lambda` Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/aws",
"plugins": {},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
}
}'
```

View File

@@ -0,0 +1,199 @@
---
title: azure-functions
keywords:
- Apache APISIX
- API Gateway
- Plugin
- Azure Functions
- azure-functions
description: This document contains information about the Apache APISIX azure-functions Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `azure-functions` Plugin is used to integrate APISIX with [Azure Serverless Function](https://azure.microsoft.com/en-in/services/functions/) as a dynamic upstream to proxy all requests for a particular URI to the Microsoft Azure Cloud.
When enabled, the Plugin terminates the ongoing request to the configured URI and initiates a new request to Azure Functions on behalf of the client with configured authorization details, request headers, body and parameters (all three passed from the original request). It returns the response with headers, status code and the body to the client that initiated the request with APISIX.
## Attributes
| Name | Type | Required | Default | Valid values | Description |
|------------------------|---------|----------|---------|--------------|---------------------------------------------------------------------------------------------------------------------------------------|
| function_uri           | string  | True     |         |              | Azure Functions endpoint which triggers the serverless function. For example, `http://test-apisix.azurewebsites.net/api/HttpTrigger`. |
| authorization | object | False | | | Authorization credentials to access Azure Functions. |
| authorization.apikey | string | False | | | Generated API key to authorize requests. |
| authorization.clientid | string | False | | | Azure AD client ID to authorize requests. |
| timeout | integer | False | 3000 | [100,...] | Proxy request timeout in milliseconds. |
| ssl_verify | boolean | False | true | true/false | When set to `true` performs SSL verification. |
| keepalive | boolean | False | true | true/false | When set to `true` keeps the connection alive for reuse. |
| keepalive_pool | integer | False | 5 | [1,...] | Maximum number of requests that can be sent on this connection before closing it. |
| keepalive_timeout      | integer | False    | 60000   | [1000,...]   | Time in ms for the connection to remain idle without closing. |
## Metadata
| Name | Type | Required | Default | Description |
|-----------------|--------|----------|---------|----------------------------------------------------------------------|
| master_apikey | string | False | "" | API Key secret that could be used to access the Azure Functions URI. |
| master_clientid | string | False | "" | Azure AD client ID that could be used to authorize the function URI. |
Metadata can be used in the `azure-functions` Plugin for an authorization fallback. If there are no authorization details in the Plugin's attributes, the `master_apikey` and `master_clientid` configured in the metadata are used.
The relative order priority is as follows:
1. The Plugin looks for the `x-functions-key` or `x-functions-clientid` header in the request to APISIX (see the example after this list).
2. If not found, the Plugin checks the configured attributes for authorization details. If present, it adds the respective header to the request sent to the Azure Functions.
3. If authorization details are not configured in the Plugin's attributes, APISIX fetches the metadata and uses the master keys.
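For example, to use the first lookup in the list above, a client can pass the key directly on the request to a Route configured with this Plugin (such as the one created under Enable Plugin below); the key value here is a placeholder:
```shell
curl -i "http://127.0.0.1:9080/azure?name=APISIX" \
  -H "x-functions-key: <Generated API key to access the Azure-Function>"
```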
To add a new master API key, you can make a request to `/apisix/admin/plugin_metadata` with the required metadata as shown below:
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl http://127.0.0.1:9180/apisix/admin/plugin_metadata/azure-functions -H "X-API-KEY: $admin_key" -X PUT -d '
{
"master_apikey" : "<Your Azure master access key>"
}'
```
## Enable Plugin
You can configure the Plugin on a specific Route as shown below assuming that you already have your Azure Functions up and running:
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"plugins": {
"azure-functions": {
"function_uri": "http://test-apisix.azurewebsites.net/api/HttpTrigger",
"authorization": {
"apikey": "<Generated API key to access the Azure-Function>"
}
}
},
"uri": "/azure"
}'
```
Now, any request (HTTP/1.1, HTTPS, HTTP/2) to the endpoint `/azure` will invoke the configured Azure Functions URI, and the response will be proxied back to the client.
In the example below, the Azure Function takes in a name from the query string and returns the message "Hello $name":
```shell
curl -i -XGET http://localhost:9080/azure\?name=APISIX
```
```shell
HTTP/1.1 200 OK
Content-Type: text/plain; charset=utf-8
Transfer-Encoding: chunked
Connection: keep-alive
Request-Context: appId=cid-v1:38aae829-293b-43c2-82c6-fa94aec0a071
Date: Wed, 17 Nov 2021 14:46:55 GMT
Server: APISIX/2.10.2
Hello, APISIX
```
Another example of a request where the client communicates with APISIX via HTTP/2 is shown below. Before proceeding, make sure you have configured `enable_http2: true` in your configuration file `config.yaml` for port `9081` and reloaded APISIX. See [`config.yaml.example`](https://github.com/apache/apisix/blob/master/conf/config.yaml.example) for the example configuration.
```shell
curl -i -XGET --http2 --http2-prior-knowledge http://localhost:9081/azure\?name=APISIX
```
```shell
HTTP/2 200
content-type: text/plain; charset=utf-8
request-context: appId=cid-v1:38aae829-293b-43c2-82c6-fa94aec0a071
date: Wed, 17 Nov 2021 14:54:07 GMT
server: APISIX/2.10.2
Hello, APISIX
```
### Configuring path forwarding
The `azure-functions` Plugin also supports URL path forwarding while proxying requests to the Azure Functions upstream. Extensions to the base request path get appended to the `function_uri` specified in the Plugin configuration.
:::info IMPORTANT
The `uri` configured on a Route must end with `*` for this feature to work properly. APISIX Routes are matched strictly and the `*` implies that any subpath to this URI would be matched to the same Route.
:::
The example below configures this feature:
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"plugins": {
"azure-functions": {
"function_uri": "http://app-bisakh.azurewebsites.net/api",
"authorization": {
"apikey": "<Generated API key to access the Azure-Function>"
}
}
},
"uri": "/azure/*"
}'
```
Now, any request to the path `/azure/HttpTrigger1` will invoke the Azure Function, and the added path will be forwarded:
```shell
curl -i -XGET http://127.0.0.1:9080/azure/HttpTrigger1\?name\=APISIX
```
```shell
HTTP/1.1 200 OK
Content-Type: text/plain; charset=utf-8
Transfer-Encoding: chunked
Connection: keep-alive
Date: Wed, 01 Dec 2021 14:19:53 GMT
Request-Context: appId=cid-v1:4d4b6221-07f1-4e1a-9ea0-b86a5d533a94
Server: APISIX/2.11.0
Hello, APISIX
```
## Delete Plugin
To remove the `azure-functions` Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/azure",
"plugins": {},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
}
}'
```

View File

@@ -0,0 +1,514 @@
---
title: basic-auth
keywords:
- Apache APISIX
- API Gateway
- Plugin
- Basic Auth
- basic-auth
description: The basic-auth Plugin adds basic access authentication for Consumers to authenticate themselves before being able to access Upstream resources.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
<head>
<link rel="canonical" href="https://docs.api7.ai/hub/basic-auth" />
</head>
## Description
The `basic-auth` Plugin adds [basic access authentication](https://en.wikipedia.org/wiki/Basic_access_authentication) for [Consumers](../terminology/consumer.md) to authenticate themselves before being able to access Upstream resources.
When a Consumer is successfully authenticated, APISIX adds additional headers, such as `X-Consumer-Username`, `X-Credential-Identifier`, and other Consumer custom headers if configured, to the request before proxying it to the Upstream service. The Upstream service can then differentiate between Consumers and implement additional logic as needed. If any of these values is not available, the corresponding header will not be added.
## Attributes
For Consumer/Credentials:
| Name | Type | Required | Description |
|----------|--------|----------|------------------------------------------------------------------------------------------------------------------------|
| username | string | True | Unique basic auth username for a consumer. |
| password | string | True | Basic auth password for the consumer. |
NOTE: `encrypt_fields = {"password"}` is also defined in the schema, which means that the field will be stored encrypted in etcd. See [encrypted storage fields](../plugin-develop.md#encrypted-storage-fields).
For Route:
| Name | Type | Required | Default | Description |
|------------------|---------|----------|---------|------------------------------------------------------------------------|
| hide_credentials | boolean | False | false | If true, do not pass the authorization request header to Upstream services. |
| anonymous_consumer | string | False | | Anonymous Consumer name. If configured, allows anonymous users to bypass the authentication. |
## Examples
The examples below demonstrate how you can work with the `basic-auth` Plugin for different scenarios.
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
### Implement Basic Authentication on Route
The following example demonstrates how to implement basic authentication on a Route.
Create a Consumer `johndoe`:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"username": "johndoe"
}'
```
Create `basic-auth` Credential for the consumer:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers/johndoe/credentials" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "cred-john-basic-auth",
"plugins": {
"basic-auth": {
"username": "johndoe",
"password": "john-key"
}
}
}'
```
Create a Route with `basic-auth`:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "basic-auth-route",
"uri": "/anything",
"plugins": {
"basic-auth": {}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
#### Verify with a Valid Key
Send a request with the valid key:
```shell
curl -i "http://127.0.0.1:9080/anything" -u johndoe:john-key
```
You should see an `HTTP/1.1 200 OK` response similar to the following:
```json
{
"args": {},
"headers": {
"Accept": "*/*",
"Apikey": "john-key",
"Authorization": "Basic am9obmRvZTpqb2huLWtleQ==",
"Host": "127.0.0.1",
"User-Agent": "curl/8.6.0",
"X-Amzn-Trace-Id": "Root=1-66e5107c-5bb3e24f2de5baf733aec1cc",
"X-Consumer-Username": "john",
"X-Credential-Indentifier": "cred-john-basic-auth",
"X-Forwarded-Host": "127.0.0.1"
},
"origin": "192.168.65.1, 205.198.122.37",
"url": "http://127.0.0.1/get"
}
```
#### Verify with an Invalid Key
Send a request with an invalid key:
```shell
curl -i "http://127.0.0.1:9080/anything" -u johndoe:invalid-key
```
You should see an `HTTP/1.1 401 Unauthorized` response with the following:
```text
{"message":"Invalid user authorization"}
```
#### Verify without a Key
Send a request without a key:
```shell
curl -i "http://127.0.0.1:9080/anything"
```
You should see an `HTTP/1.1 401 Unauthorized` response with the following:
```text
{"message":"Missing authorization in request"}
```
### Hide Authentication Information From Upstream
The following example demonstrates how to prevent the credentials from being sent to the Upstream services by configuring `hide_credentials`. In APISIX, the authentication credentials are forwarded to the Upstream services by default, which might lead to security risks in some circumstances, so you should consider enabling `hide_credentials`.
Create a Consumer `johndoe`:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"username": "johndoe"
}'
```
Create `basic-auth` Credential for the consumer:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers/johndoe/credentials" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "cred-john-basic-auth",
"plugins": {
"basic-auth": {
"username": "johndoe",
"password": "john-key"
}
}
}'
```
#### Without Hiding Credentials
Create a Route with `basic-auth` and configure `hide_credentials` to `false`, which is the default configuration:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "basic-auth-route",
"uri": "/anything",
"plugins": {
"basic-auth": {
"hide_credentials": false
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Send a request with the valid key:
```shell
curl -i "http://127.0.0.1:9080/anything" -u johndoe:john-key
```
You should see an `HTTP/1.1 200 OK` response with the following:
```json
{
"args": {},
"data": "",
"files": {},
"form": {},
"headers": {
"Accept": "*/*",
"Authorization": "Basic am9obmRvZTpqb2huLWtleQ==",
"Host": "127.0.0.1",
"User-Agent": "curl/8.6.0",
"X-Amzn-Trace-Id": "Root=1-66cc2195-22bd5f401b13480e63c498c6",
"X-Consumer-Username": "john",
"X-Credential-Indentifier": "cred-john-basic-auth",
"X-Forwarded-Host": "127.0.0.1"
},
"json": null,
"method": "GET",
"origin": "192.168.65.1, 43.228.226.23",
"url": "http://127.0.0.1/anything"
}
```
Note that the credentials are visible to the Upstream service in base64-encoded format.
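You can confirm this by decoding the header value locally; this is just a quick check with the standard `base64` utility, not an APISIX feature:
```shell
echo "am9obmRvZTpqb2huLWtleQ==" | base64 -d
# prints: johndoe:john-key
```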
:::tip
You can also pass the base64-encoded credentials in the request using the `Authorization` header as such:
```shell
curl -i "http://127.0.0.1:9080/anything" -H "Authorization: Basic am9obmRvZTpqb2huLWtleQ=="
```
:::
#### Hide Credentials
Update the plugin's `hide_credentials` to `true`:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes/basic-auth-route" -X PATCH \
-H "X-API-KEY: ${admin_key}" \
-d '{
"plugins": {
"basic-auth": {
"hide_credentials": true
}
}
}'
```
Send a request with the valid key:
```shell
curl -i "http://127.0.0.1:9080/anything" -u johndoe:john-key
```
You should see an `HTTP/1.1 200 OK` response with the following:
```json
{
"args": {},
"data": "",
"files": {},
"form": {},
"headers": {
"Accept": "*/*",
"Host": "127.0.0.1",
"User-Agent": "curl/8.6.0",
"X-Amzn-Trace-Id": "Root=1-66cc21a7-4f6ac87946e25f325167d53a",
"X-Consumer-Username": "john",
"X-Credential-Indentifier": "cred-john-basic-auth",
"X-Forwarded-Host": "127.0.0.1"
},
"json": null,
"method": "GET",
"origin": "192.168.65.1, 43.228.226.23",
"url": "http://127.0.0.1/anything"
}
```
Note that the credentials are no longer visible to the Upstream service.
### Add Consumer Custom ID to Header
The following example demonstrates how you can attach a Consumer custom ID to the authenticated request in the `X-Consumer-Custom-Id` header, which can be used to implement additional logic as needed.
Create a Consumer `johndoe` with a custom ID label:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"username": "johndoe",
"labels": {
"custom_id": "495aec6a"
}
}'
```
Create `basic-auth` Credential for the consumer:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers/johndoe/credentials" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "cred-john-basic-auth",
"plugins": {
"basic-auth": {
"username": "johndoe",
"password": "john-key"
}
}
}'
```
Create a Route with `basic-auth`:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "basic-auth-route",
"uri": "/anything",
"plugins": {
"basic-auth": {}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
To verify, send a request to the Route with the valid key:
```shell
curl -i "http://127.0.0.1:9080/anything" -u johndoe:john-key
```
You should see an `HTTP/1.1 200 OK` response with the `X-Consumer-Custom-Id` header, similar to the following:
```json
{
"args": {},
"data": "",
"files": {},
"form": {},
"headers": {
"Accept": "*/*",
"Authorization": "Basic am9obmRvZTpqb2huLWtleQ==",
"Host": "127.0.0.1",
"User-Agent": "curl/8.6.0",
"X-Amzn-Trace-Id": "Root=1-66ea8d64-33df89052ae198a706e18c2a",
"X-Consumer-Username": "johndoe",
"X-Credential-Identifier": "cred-john-basic-auth",
"X-Consumer-Custom-Id": "495aec6a",
"X-Forwarded-Host": "127.0.0.1"
},
"json": null,
"method": "GET",
"origin": "192.168.65.1, 205.198.122.37",
"url": "http://127.0.0.1/anything"
}
```
### Rate Limit with Anonymous Consumer
The following example demonstrates how you can configure different rate limiting policies for regular and anonymous Consumers, where the anonymous Consumer does not need to authenticate and has a smaller quota.
Create a regular Consumer `johndoe` and configure the `limit-count` Plugin to allow for a quota of 3 within a 30-second window:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"username": "johndoe",
"plugins": {
"limit-count": {
"count": 3,
"time_window": 30,
"rejected_code": 429
}
}
}'
```
Create the `basic-auth` Credential for the Consumer `johndoe`:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers/johndoe/credentials" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "cred-john-basic-auth",
"plugins": {
"basic-auth": {
"username": "johndoe",
"password": "john-key"
}
}
}'
```
Create an anonymous user `anonymous` and configure the `limit-count` Plugin to allow for a quota of 1 within a 30-second window:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"username": "anonymous",
"plugins": {
"limit-count": {
"count": 1,
"time_window": 30,
"rejected_code": 429
}
}
}'
```
Create a Route and configure the `basic-auth` Plugin to allow the anonymous Consumer `anonymous` to bypass the authentication:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "basic-auth-route",
"uri": "/anything",
"plugins": {
"basic-auth": {
"anonymous_consumer": "anonymous"
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
To verify, send five consecutive requests with `johndoe`'s key:
```shell
resp=$(seq 5 | xargs -I{} curl "http://127.0.0.1:9080/anything" -u johndoe:john-key -o /dev/null -s -w "%{http_code}\n") && \
count_200=$(echo "$resp" | grep "200" | wc -l) && \
count_429=$(echo "$resp" | grep "429" | wc -l) && \
echo "200": $count_200, "429": $count_429
```
You should see the following response, showing that out of the 5 requests, 3 requests were successful (status code 200) while the others were rejected (status code 429).
```text
200: 3, 429: 2
```
Send five anonymous requests:
```shell
resp=$(seq 5 | xargs -I{} curl "http://127.0.0.1:9080/anything" -o /dev/null -s -w "%{http_code}\n") && \
count_200=$(echo "$resp" | grep "200" | wc -l) && \
count_429=$(echo "$resp" | grep "429" | wc -l) && \
echo "200": $count_200, "429": $count_429
```
You should see the following response, showing that only one request was successful:
```text
200: 1, 429: 4
```

View File

@@ -0,0 +1,225 @@
---
title: batch-requests
keywords:
- Apache APISIX
- API Gateway
- Plugin
- Batch Requests
description: This document contains information about the Apache APISIX batch-requests Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
After enabling the `batch-requests` Plugin, a client can assemble multiple requests into a single request and send it to the gateway. The gateway parses the individual requests out of the request body and wraps each one into a separate request. Instead of the client initiating multiple HTTP requests to the gateway, the gateway uses the HTTP pipeline method to run each request through the usual stages such as route matching and forwarding to the corresponding upstream, then merges the results and returns the combined response to the client.
![batch-request](https://static.apiseven.com/uploads/2023/06/27/ATzEuOn4_batch-request.png)
In cases where the client needs to access multiple APIs, this will significantly improve performance.
:::note
The request headers in the user's original request (except for headers starting with `Content-`, such as `Content-Type`) will be assigned to each request in the HTTP pipeline. Therefore, to the gateway, these pipelined requests sent to itself are no different from external requests initiated directly by users: they can only access pre-configured Routes and undergo the complete authentication process, so there are no security issues.
If the request headers of the original request conflict with those configured in the Plugin, the request headers configured in the Plugin take precedence (except for the `real_ip_header` specified in the configuration file).
:::
## Attributes
None.
## API
This plugin adds `/apisix/batch-requests` as an endpoint.
:::note
You may need to use the [public-api](public-api.md) plugin to expose this endpoint.
:::
## Enable Plugin
You can enable the `batch-requests` Plugin by adding it to your configuration file (`conf/config.yaml`):
```yaml title="conf/config.yaml"
plugins:
- ...
- batch-requests
```
## Configuration
By default, the maximum body size that can be sent to `/apisix/batch-requests` can't be larger than 1 MiB. You can change this configuration of the Plugin through the endpoint `apisix/admin/plugin_metadata/batch-requests`:
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl http://127.0.0.1:9180/apisix/admin/plugin_metadata/batch-requests -H "X-API-KEY: $admin_key" -X PUT -d '
{
"max_body_size": 4194304
}'
```
## Metadata
| Name | Type | Required | Default | Valid values | Description |
| ------------- | ------- | -------- | ------- | ------------ | ------------------------------------------ |
| max_body_size | integer | True | 1048576 | [1, ...] | Maximum size of the request body in bytes. |
## Request and response format
This plugin will create an API endpoint in APISIX to handle batch requests.
### Request
| Name | Type | Required | Default | Description |
| -------- |------------------------------------| -------- | ------- | ----------------------------- |
| query | object | False | | Query string for the request. |
| headers | object | False | | Headers for all the requests. |
| timeout | integer | False | 30000 | Timeout in ms. |
| pipeline | array[[HttpRequest](#httprequest)] | True | | Details of the request. |
#### HttpRequest
| Name | Type | Required | Default | Valid | Description |
| ---------- | ------- | -------- | ------- | -------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------- |
| version | string | False | 1.1 | [1.0, 1.1] | HTTP version. |
| method | string | False | GET | ["GET", "POST", "PUT", "DELETE", "PATCH", "HEAD", "OPTIONS", "CONNECT", "TRACE"] | HTTP method. |
| query | object | False | | | Query string for the request. If set, overrides the value of the global query string. |
| headers | object | False | | | Headers for the request. If set, overrides the value of the global headers. |
| path | string | True | | | Path of the HTTP request. |
| body | string | False | | | Body of the HTTP request. |
| ssl_verify | boolean | False | false | | When set to `true`, verifies that the SSL certificate matches the hostname. |
### Response
The response is an array of [HttpResponses](#httpresponse).
#### HttpResponse
| Name | Type | Description |
| ------- | ------- | ---------------------- |
| status | integer | HTTP status code. |
| reason | string | HTTP reason-phrase. |
| body | string | HTTP response body. |
| headers | object | HTTP response headers. |
## Specifying a custom URI
You can specify a custom URI with the [public-api](public-api.md) Plugin.
You can set the URI you want when creating the Route and change the configuration of the public-api Plugin:
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/br -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/batch-requests",
"plugins": {
"public-api": {
"uri": "/apisix/batch-requests"
}
}
}'
```
## Example usage
First, you need to set up a Route to the batch request API. We will use the [public-api](public-api.md) Plugin for this:
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/br -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/apisix/batch-requests",
"plugins": {
"public-api": {}
}
}'
```
Now you can make a request to the batch request API (`/apisix/batch-requests`):
```shell
curl --location --request POST 'http://127.0.0.1:9080/apisix/batch-requests' \
--header 'Content-Type: application/json' \
--data '{
"headers": {
"Content-Type": "application/json",
"admin-jwt":"xxxx"
},
"timeout": 500,
"pipeline": [
{
"method": "POST",
"path": "/community.GiftSrv/GetGifts",
"body": "test"
},
{
"method": "POST",
"path": "/community.GiftSrv/GetGifts",
"body": "test2"
}
]
}'
```
This will give a response:
```json
[
{
"status": 200,
"reason": "OK",
"body": "{\"ret\":500,\"msg\":\"error\",\"game_info\":null,\"gift\":[],\"to_gets\":0,\"get_all_msg\":\"\"}",
"headers": {
"Connection": "keep-alive",
"Date": "Sat, 11 Apr 2020 17:53:20 GMT",
"Content-Type": "application/json",
"Content-Length": "81",
"Server": "APISIX web server"
}
},
{
"status": 200,
"reason": "OK",
"body": "{\"ret\":500,\"msg\":\"error\",\"game_info\":null,\"gift\":[],\"to_gets\":0,\"get_all_msg\":\"\"}",
"headers": {
"Connection": "keep-alive",
"Date": "Sat, 11 Apr 2020 17:53:20 GMT",
"Content-Type": "application/json",
"Content-Length": "81",
"Server": "APISIX web server"
}
}
]
```
## Delete Plugin
You can remove `batch-requests` from your list of Plugins in your configuration file (`conf/config.yaml`).
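For example, if you enabled the Plugin as shown earlier, removing or commenting out the entry is sufficient (a sketch mirroring the enable example above); the change takes effect after APISIX reloads the configuration:
```yaml title="conf/config.yaml"
plugins:
  - ...
  # - batch-requests   # removed to disable the Plugin
```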

View File

@@ -0,0 +1,609 @@
---
title: body-transformer
keywords:
- Apache APISIX
- API Gateway
- Plugin
- BODY TRANSFORMER
- body-transformer
description: The body-transformer Plugin performs template-based transformations to transform the request and/or response bodies from one format to another, for example, from JSON to JSON, JSON to HTML, or XML to YAML.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
<head>
<link rel="canonical" href="https://docs.api7.ai/hub/body-transformer" />
</head>
## Description
The `body-transformer` Plugin performs template-based transformations to transform the request and/or response bodies from one format to another, for example, from JSON to JSON, JSON to HTML, or XML to YAML.
## Attributes
| Name | Type | Required | Default | Valid values | Description |
| ------------- | ------- | -------- | ------- | ------------ | ------------------------------------------ |
| `request` | object | False | | | Request body transformation configuration. |
| `request.input_format` | string | False | | [`xml`,`json`,`encoded`,`args`,`plain`,`multipart`] | Request body original media type. If unspecified, the value would be determined by the `Content-Type` header to apply the corresponding decoder. The `xml` option corresponds to `text/xml` media type. The `json` option corresponds to `application/json` media type. The `encoded` option corresponds to `application/x-www-form-urlencoded` media type. The `args` option corresponds to GET requests. The `plain` option corresponds to `text/plain` media type. The `multipart` option corresponds to `multipart/related` media type. If the media type is none of these, the value is left unset and the transformation template is applied directly. |
| `request.template` | string | True | | | Request body transformation template. The template uses [lua-resty-template](https://github.com/bungle/lua-resty-template) syntax. See the [template syntax](https://github.com/bungle/lua-resty-template#template-syntax) for more details. You can also use auxiliary functions `_escape_json()` and `_escape_xml()` to escape special characters such as double quotes, `_body` to access request body, and `_ctx` to access context variables. |
| `request.template_is_base64` | boolean | False | false | | Set to true if the template is base64 encoded. |
| `response` | object | False | | | Response body transformation configuration. |
| `response.input_format` | string | False | | [`xml`,`json`] | Response body original media type. If unspecified, the value would be determined by the `Content-Type` header to apply the corresponding decoder. If the media type is neither `xml` nor `json`, the value would be left unset and the transformation template will be directly applied. |
| `response.template` | string | True | | | Response body transformation template. |
| `response.template_is_base64` | boolean | False | false | | Set to true if the template is base64 encoded. |
## Examples
The examples below demonstrate how you can configure `body-transformer` for different scenarios.
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
The transformation template uses [lua-resty-template](https://github.com/bungle/lua-resty-template) syntax. See the [template syntax](https://github.com/bungle/lua-resty-template#template-syntax) to learn more.
You can also use auxiliary functions `_escape_json()` and `_escape_xml()` to escape special characters such as double quotes, `_body` to access request body, and `_ctx` to access context variables.
In all cases, you should ensure that the transformation template is a valid JSON string.
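As a minimal illustration (a sketch, not one of the examples below), the following request template copies the incoming `name` field into a new body and uses `_escape_json()` so that quotes or other special characters in the value cannot break the generated JSON:
```shell
# Minimal request template sketch: `name` is assumed to be a field of the
# decoded request body; _escape_json() emits it as a safely quoted JSON string.
req_template='{"greeting":{*_escape_json(name)*}}'
```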
### Transform between JSON and XML SOAP
The following example demonstrates how to transform the request body from JSON to XML and the response body from XML to JSON when working with a SOAP Upstream service.
Start the sample SOAP service:
```shell
cd /tmp
git clone https://github.com/spring-guides/gs-soap-service.git
cd gs-soap-service/complete
./mvnw spring-boot:run
```
Create the request and response transformation templates:
```shell
req_template=$(cat <<EOF | awk '{gsub(/"/,"\\\"");};1' | awk '{$1=$1};1' | tr -d '\r\n'
<?xml version="1.0"?>
<soap-env:Envelope xmlns:soap-env="http://schemas.xmlsoap.org/soap/envelope/">
<soap-env:Body>
<ns0:getCountryRequest xmlns:ns0="http://spring.io/guides/gs-producing-web-service">
<ns0:name>{{_escape_xml(name)}}</ns0:name>
</ns0:getCountryRequest>
</soap-env:Body>
</soap-env:Envelope>
EOF
)
rsp_template=$(cat <<EOF | awk '{gsub(/"/,"\\\"");};1' | awk '{$1=$1};1' | tr -d '\r\n'
{% if Envelope.Body.Fault == nil then %}
{
"status":"{{_ctx.var.status}}",
"currency":"{{Envelope.Body.getCountryResponse.country.currency}}",
"population":{{Envelope.Body.getCountryResponse.country.population}},
"capital":"{{Envelope.Body.getCountryResponse.country.capital}}",
"name":"{{Envelope.Body.getCountryResponse.country.name}}"
}
{% else %}
{
"message":{*_escape_json(Envelope.Body.Fault.faultstring[1])*},
"code":"{{Envelope.Body.Fault.faultcode}}"
{% if Envelope.Body.Fault.faultactor ~= nil then %}
, "actor":"{{Envelope.Body.Fault.faultactor}}"
{% end %}
}
{% end %}
EOF
)
```
`awk` and `tr` are used above to manipulate the template such that the template would be a valid JSON string.
Create a Route with `body-transformer` using the templates created previously. In the Plugin, set the request input format as JSON, the response input format as XML, and the `Content-Type` header to `text/xml` for the Upstream service to respond properly:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "body-transformer-route",
"methods": ["POST"],
"uri": "/ws",
"plugins": {
"body-transformer": {
"request": {
"template": "'"$req_template"'",
"input_format": "json"
},
"response": {
"template": "'"$rsp_template"'",
"input_format": "xml"
}
},
"proxy-rewrite": {
"headers": {
"set": {
"Content-Type": "text/xml"
}
}
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"localhost:8080": 1
}
}
}'
```
:::tip
If it is cumbersome to adjust complex text files to be valid transformation templates, you can use the base64 utility to encode the files, such as the following:
```json
"body-transformer": {
"request": {
"template": "'"$(base64 -w0 /path/to/request_template_file)"'"
},
"response": {
"template": "'"$(base64 -w0 /path/to/response_template_file)"'"
}
}
```
:::
Send a request with a valid JSON body:
```shell
curl "http://127.0.0.1:9080/ws" -X POST -d '{"name": "Spain"}'
```
The JSON body sent in the request will be transformed into XML before being forwarded to the Upstream SOAP service, and the response body will be transformed back from XML to JSON.
You should see a response similar to the following:
```json
{
"status": "200",
"currency": "EUR",
"population": 46704314,
"capital": "Madrid",
"name": "Spain"
}
```
### Modify Request Body
The following example demonstrates how to dynamically modify the request body.
Create a Route with `body-transformer`, in which the template appends the word `world` to the `name` and adds `10` to the `age`, setting the results as the values of `foo` and `bar` respectively:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "body-transformer-route",
"uri": "/anything",
"plugins": {
"body-transformer": {
"request": {
"template": "{\"foo\":\"{{name .. \" world\"}}\",\"bar\":{{age+10}}}"
}
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Send a request to the Route:
```shell
curl "http://127.0.0.1:9080/anything" -X POST \
-H "Content-Type: application/json" \
-d '{"name":"hello","age":20}' \
-i
```
You should see a response of the following:
```json
{
"args": {},
"data": "{\"foo\":\"hello world\",\"bar\":30}",
...
"json": {
"bar": 30,
"foo": "hello world"
},
"method": "POST",
...
}
```
### Generate Request Body Using Variables
The following example demonstrates how to generate the request body dynamically using the `_ctx` context variables.
Create a Route with `body-transformer`, in which the template accesses the request argument using the [Nginx variable](https://nginx.org/en/docs/http/ngx_http_core_module.html) `arg_name`:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "body-transformer-route",
"uri": "/anything",
"plugins": {
"body-transformer": {
"request": {
"template": "{\"foo\":\"{{_ctx.var.arg_name .. \" world\"}}\"}"
}
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Send a request to the Route with `name` argument:
```shell
curl -i "http://127.0.0.1:9080/anything?name=hello"
```
You should see a response like this:
```json
{
"args": {
"name": "hello"
},
...,
"json": {
"foo": "hello world"
},
...
}
```
### Transform Body from YAML to JSON
The following example demonstrates how to transform request body from YAML to JSON.
Create the request transformation template:
```shell
req_template=$(cat <<EOF | awk '{gsub(/"/,"\\\"");};1'
{%
local yaml = require("tinyyaml")
local body = yaml.parse(_body)
%}
{"foobar":"{{body.foobar.foo .. " " .. body.foobar.bar}}"}
EOF
)
```
Create a Route with `body-transformer` that uses the template:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "body-transformer-route",
"uri": "/anything",
"plugins": {
"body-transformer": {
"request": {
"template": "'"$req_template"'"
}
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Send a request to the Route with a YAML body:
```shell
body='
foobar:
foo: hello
bar: world'
curl "http://127.0.0.1:9080/anything" -X POST \
-d "$body" \
-H "Content-Type: text/yaml" \
-i
```
You should see a response similar to the following, which verifies that the YAML body was appropriately transformed to JSON:
```json
{
"args": {},
"data": "{\"foobar\":\"hello world\"}",
...
"json": {
"foobar": "hello world"
},
...
}
```
### Transform Form URL Encoded Body to JSON
The following example demonstrates how to transform `form-urlencoded` body to JSON.
Create a Route with `body-transformer`, which sets the `input_format` to `encoded` and configures a template that appends the string `world` to the `name` input and adds `10` to the `age` input:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "body-transformer-route",
"uri": "/anything",
"plugins": {
"body-transformer": {
"request": {
"input_format": "encoded",
"template": "{\"foo\":\"{{name .. \" world\"}}\",\"bar\":{{age+10}}}"
}
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Send a POST request to the Route with an encoded body:
```shell
curl "http://127.0.0.1:9080/anything" -X POST \
-H "Content-Type: application/x-www-form-urlencoded" \
-d 'name=hello&age=20'
```
You should see a response similar to the following:
```json
{
"args": {},
"data": "",
"files": {},
"form": {
"{\"foo\":\"hello world\",\"bar\":30}": ""
},
"headers": {
...
},
...
}
```
### Transform GET Request Query Parameter to Body
The following example demonstrates how to transform a GET request query parameter to request body. Note that this does not transform the HTTP method. To transform the method, see [`proxy-rewrite`](./proxy-rewrite.md).
Create a Route with `body-transformer`, which sets the `input_format` to `args` and configures a template that adds a message to the request:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "body-transformer-route",
"uri": "/anything",
"plugins": {
"body-transformer": {
"request": {
"input_format": "args",
"template": "{\"message\": \"hello {{name}}\"}"
}
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Send a GET request to the Route:
```shell
curl "http://127.0.0.1:9080/anything?name=john"
```
You should see a response similar to the following:
```json
{
"args": {},
"data": "{\"message\": \"hello john\"}",
"files": {},
"form": {},
"headers": {
...
},
"json": {
"message": "hello john"
},
"method": "GET",
...
}
```
### Transform Plain Media Type
The following example demonstrates how to transform requests with `plain` media type.
Create a Route with `body-transformer`, which sets the `input_format` to `plain` and configures a template to remove `not` and a subsequent space from the body string:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "body-transformer-route",
"uri": "/anything",
"plugins": {
"body-transformer": {
"request": {
"input_format": "plain",
"template": "{\"message\": \"{* string.gsub(_body, \"not \", \"\") *}\"}"
}
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Send a POST request to the Route:
```shell
curl "http://127.0.0.1:9080/anything" -X POST \
-d 'not actually json' \
-i
```
You should see a response similar to the following:
```json
{
"args": {},
"data": "",
"files": {},
"form": {
"{\"message\": \"actually json\"}": ""
},
"headers": {
...
},
...
}
```
### Transform Multipart Media Type
The following example demonstrates how to transform requests with `multipart` media type.
Create a request transformation template which adds a `status` to the body based on the `age` provided in the request body:
```shell
req_template=$(cat <<EOF | awk '{gsub(/"/,"\\\"");};1'
{%
local core = require 'apisix.core'
local cjson = require 'cjson'
if tonumber(context.age) > 18 then
context._multipart:set_simple("status", "adult")
else
context._multipart:set_simple("status", "minor")
end
local body = context._multipart:tostring()
%}{* body *}
EOF
)
```
Create a Route with `body-transformer`, which sets the `input_format` to `multipart` and uses the previously created request template for transformation:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "body-transformer-route",
"uri": "/anything",
"plugins": {
"body-transformer": {
"request": {
"input_format": "multipart",
"template": "'"$req_template"'"
}
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Send a multipart POST request to the Route:
```shell
curl -X POST \
-F "name=john" \
-F "age=10" \
"http://127.0.0.1:9080/anything"
```
You should see a response similar to the following:
```json
{
"args": {},
"data": "",
"files": {},
"form": {
"age": "10",
"name": "john",
"status": "minor"
},
"headers": {
"Accept": "*/*",
"Content-Length": "361",
"Content-Type": "multipart/form-data; boundary=------------------------qtPjk4c8ZjmGOXNKzhqnOP",
...
},
...
}
```

View File

@@ -0,0 +1,138 @@
---
title: brotli
keywords:
- Apache APISIX
- API Gateway
- Plugin
- brotli
description: This document contains information about the Apache APISIX brotli Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `brotli` Plugin dynamically sets the behavior of [brotli in Nginx](https://github.com/google/ngx_brotli).
## Prerequisites
This Plugin requires brotli shared libraries.
The following example commands build and install the brotli shared libraries:
``` shell
wget https://github.com/google/brotli/archive/refs/tags/v1.1.0.zip
unzip v1.1.0.zip
cd brotli-1.1.0 && mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr/local/brotli ..
sudo cmake --build . --config Release --target install
sudo sh -c "echo /usr/local/brotli/lib >> /etc/ld.so.conf.d/brotli.conf"
sudo ldconfig
```
:::caution
If the upstream is returning a compressed response, then the Brotli plugin won't be able to compress it.
:::
## Attributes
| Name | Type | Required | Default | Valid values | Description |
|----------------|----------------------|----------|---------------|--------------|-----------------------------------------------------------------------------------------|
| types | array[string] or "*" | False | ["text/html"] | | Dynamically sets the `brotli_types` directive. Special value `"*"` matches any MIME type. |
| min_length | integer | False | 20 | >= 1 | Dynamically sets the `brotli_min_length` directive. |
| comp_level | integer | False | 6 | [0, 11] | Dynamically sets the `brotli_comp_level` directive. |
| mode           | integer              | False    | 0             | [0, 2]       | Dynamically sets the brotli compress mode. See [RFC 7932](https://tools.ietf.org/html/rfc7932) for more information. |
| lgwin          | integer              | False    | 19            | [0, 10-24]   | Dynamically sets the brotli sliding window size. `lgwin` is the base-2 logarithm of the sliding window size; setting it to `0` lets the compressor decide the optimal value. See [RFC 7932](https://tools.ietf.org/html/rfc7932) for more information. |
| lgblock        | integer              | False    | 0             | [0, 16-24]   | Dynamically sets the brotli input block size. `lgblock` is the base-2 logarithm of the maximum input block size; setting it to `0` lets the compressor decide the optimal value. See [RFC 7932](https://tools.ietf.org/html/rfc7932) for more information. |
| http_version | number | False | 1.1 | 1.1, 1.0 | Like the `gzip_http_version` directive, sets the minimum HTTP version of a request required to compress a response. |
| vary | boolean | False | false | | Like the `gzip_vary` directive, enables or disables inserting the “Vary: Accept-Encoding” response header field. |
## Enable Plugin
The example below enables the `brotli` Plugin on the specified Route:
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl -i http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/",
"plugins": {
"brotli": {
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org": 1
}
}
}'
```
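If you want to tune the compression behavior rather than rely on the defaults, you can set the attributes from the table above explicitly. The values below are illustrative only; adjust them to your content types and CPU budget:
```shell
curl -i http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
    "uri": "/",
    "plugins": {
        "brotli": {
            "types": ["text/html", "application/json"],
            "min_length": 100,
            "comp_level": 6,
            "vary": true
        }
    },
    "upstream": {
        "type": "roundrobin",
        "nodes": {
            "httpbin.org": 1
        }
    }
}'
```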
## Example usage
Once you have configured the Plugin as shown above, you can make a request as shown below:
```shell
curl http://127.0.0.1:9080/ -i -H "Accept-Encoding: br"
```
```
HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked
Connection: keep-alive
Date: Tue, 05 Dec 2023 03:06:49 GMT
Access-Control-Allow-Origin: *
Access-Control-Allow-Credentials: true
Server: APISIX/3.6.0
Content-Encoding: br
Warning: Binary output can mess up your terminal. Use "--output -" to tell
Warning: curl to output it to your terminal anyway, or consider "--output
Warning: <FILE>" to save to a file.
```
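Because the compressed payload is binary, you can write it to a file instead of the terminal if you want to inspect it; the file name below is arbitrary:

```shell
curl http://127.0.0.1:9080/ -H "Accept-Encoding: br" --output response.br
```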
## Delete Plugin
To remove the `brotli` Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/",
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org": 1
}
}
}'
```

View File

@@ -0,0 +1,117 @@
---
title: cas-auth
keywords:
- Apache APISIX
- API Gateway
- Plugin
- CAS AUTH
- cas-auth
description: This document contains information about the Apache APISIX cas-auth Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `cas-auth` Plugin can be used to authenticate against a CAS (Central Authentication Service 2.0) IdP (Identity Provider) from the SP (Service Provider) perspective.
## Attributes
| Name | Type | Required | Description |
| ----------- | ----------- | ----------- | ----------- |
| `idp_uri` | string | True | URI of the IdP. |
| `cas_callback_uri` | string | True | Redirect URI used to call back to the SP from the IdP after login or logout. |
| `logout_uri` | string | True | Logout URI that triggers the logout process. |
## Enable Plugin
You can enable the Plugin on a specific Route as shown below:
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/cas1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"methods": ["GET", "POST"],
"host" : "127.0.0.1",
"uri": "/anything/*",
"plugins": {
"cas-auth": {
"idp_uri": "http://127.0.0.1:8080/realms/test/protocol/cas",
"cas_callback_uri": "/anything/cas_callback",
"logout_uri": "/anything/logout"
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org": 1
}
}
}'
```
## Configuration description
Once you have enabled the Plugin, a new user visiting this Route will first be processed by the `cas-auth` Plugin.
If no login session exists, the user is redirected to the login page of `idp_uri`.
After successfully logging in at the IdP, the IdP redirects the user to `cas_callback_uri` with the CAS ticket specified as a GET parameter. If the ticket is verified, a login session is created.
This process is only done once and subsequent requests are left uninterrupted.
Once this is done, the user is redirected to the original URL they wanted to visit.
Later, the user can visit `logout_uri` to start the logout process. The user is redirected to `idp_uri` to log out.
Note that `cas_callback_uri` and `logout_uri` should be either a fully qualified address (e.g. `http://127.0.0.1:9080/anything/logout`) or a path only (e.g. `/anything/logout`); using a path only is recommended for consistency.
These URIs need to be matched by the Route on which the Plugin is configured.
For example, if the `uri` of the current Route is `/api/v1/*`, `cas_callback_uri` can be set to `/api/v1/cas_callback`.
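With the Route configured above, you can roughly verify the first step of the flow with curl. Without an existing login session, a request to a protected path should be answered with a redirect whose `Location` header points to the login page of `idp_uri` (the exact response depends on your IdP setup):

```shell
curl -i "http://127.0.0.1:9080/anything/test" -H "Host: 127.0.0.1"
```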
## Delete Plugin
To remove the `cas-auth` Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/cas1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"methods": ["GET", "POST"],
"uri": "/anything/*",
"plugins": {},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```

View File

@@ -0,0 +1,284 @@
---
title: chaitin-waf
keywords:
- Apache APISIX
- API Gateway
- Plugin
- WAF
description: This document contains basic information about the Apache APISIX `chaitin-waf` plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
After the `chaitin-waf` plugin is enabled, traffic is forwarded to the Chaitin SafeLine WAF service for detection and prevention of various web application attacks, helping to secure the application and user data.
## Response Headers
Depending on the plugin configuration, additional response headers can optionally be added.
The response headers are listed below:
- **X-APISIX-CHAITIN-WAF**: Whether APISIX forwards the request to the WAF server.
- yes: forwarded
- no: not forwarded
- unhealthy: matches the match variables, but no WAF server is available.
- err: an error occurred during the execution of the plugin. Also includes the **X-APISIX-CHAITIN-WAF-ERROR** header.
- waf-err: error while interacting with the WAF server. Also includes the **X-APISIX-CHAITIN-WAF-ERROR** header.
- timeout: request to the WAF server timed out.
- **X-APISIX-CHAITIN-WAF-ERROR**: Debug header. Contains WAF error message.
- **X-APISIX-CHAITIN-WAF-TIME**: The time in milliseconds that APISIX spent interacting with WAF.
- **X-APISIX-CHAITIN-WAF-STATUS**: The status code returned to APISIX by the WAF server.
- **X-APISIX-CHAITIN-WAF-ACTION**: The action returned to APISIX by the WAF server.
- pass: request valid and passed.
- reject: request rejected by WAF service.
- **X-APISIX-CHAITIN-WAF-SERVER**: Debug header. Indicates which WAF server was selected.
## Plugin Metadata
| Name | Type | Required | Default value | Description |
|--------------------------|---------------|----------|---------------|------------------------------------------------------------------------------------------------------------------------------|
| nodes | array(object) | true | | A list of addresses for the Chaitin SafeLine WAF service. |
| nodes[0].host | string | true | | The address of Chaitin SafeLine WAF service. Supports IPv4, IPv6, Unix Socket, etc. |
| nodes[0].port | integer | false | 80 | The port of the Chaitin SafeLine WAF service. |
| mode | string | false | block | The global default mode if a Route doesn't specify its own: `off`, `monitor`, or `block`. |
| config | object | false | | WAF configuration defaults if none are specified on the Route. |
| config.connect_timeout | integer | false | 1000 | Connect timeout, in milliseconds. |
| config.send_timeout | integer | false | 1000 | Send timeout, in milliseconds. |
| config.read_timeout | integer | false | 1000 | Read timeout, in milliseconds. |
| config.req_body_size | integer | false | 1024 | Request body size, in KB. |
| config.keepalive_size | integer | false | 256 | Maximum concurrent idle connections to the SafeLine WAF detection service. |
| config.keepalive_timeout | integer | false | 60000 | Idle connection timeout, in milliseconds. |
| config.real_client_ip | boolean | false | true | Specifies whether to use the `X-Forwarded-For` as the client IP (if present). If `false`, uses the direct client IP from the connection. |
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```bash
curl http://127.0.0.1:9180/apisix/admin/plugin_metadata/chaitin-waf -H "X-API-KEY: $admin_key" -X PUT -d '
{
"nodes": [
{
"host": "unix:/path/to/safeline/resources/detector/snserver.sock",
"port": 8000
}
],
"mode": "block",
"config": {
"real_client_ip": true
}
}'
```
## Attributes
| Name | Type | Required | Default value | Description |
|--------------------------|---------------|----------|---------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| mode | string | false | block | Determines how the plugin behaves for matched requests. Valid values are `off`, `monitor`, or `block`. When set to `off`, the plugin skips WAF checks. In `monitor` mode, the plugin logs potential blocks without actually blocking the request. In `block` mode, the plugin enforces blocks as determined by the WAF service. |
| match | array[object] | false | | A list of matching rules. The plugin evaluates these rules to decide whether to perform the WAF check on a request. If empty, all requests are processed. |
| match.vars | array[array] | false | | List of variables used for matching requests. Each rule is specified as `[variable, operator, value]` (for example, `["http_waf", "==", "true"]`). These variables refer to NGINX internal variables. For supported operators, see [lua-resty-expr](https://github.com/api7/lua-resty-expr#operator-list). |
| append_waf_resp_header | bool | false | true | Determines whether the plugin adds WAF-related response headers (such as `X-APISIX-CHAITIN-WAF`, `X-APISIX-CHAITIN-WAF-ACTION`, etc.) to the response. |
| append_waf_debug_header | bool | false | false | Determines whether debugging headers (such as `X-APISIX-CHAITIN-WAF-ERROR` and `X-APISIX-CHAITIN-WAF-SERVER`) are added. Effective only when `append_waf_resp_header` is enabled. |
| config | object | false | | Provides route-specific configuration for the Chaitin SafeLine WAF service. Settings here override the corresponding metadata defaults when specified. |
| config.connect_timeout | integer | false | 1000 | The connect timeout for the WAF server, in milliseconds. |
| config.send_timeout | integer | false | 1000 | The send timeout for transmitting data to the WAF server, in milliseconds. |
| config.read_timeout | integer | false | 1000 | The read timeout for receiving data from the WAF server, in milliseconds. |
| config.req_body_size | integer | false | 1024 | The maximum allowed request body size, in KB. |
| config.keepalive_size | integer | false | 256 | The maximum number of idle connections to the WAF detection service that can be maintained concurrently. |
| config.keepalive_timeout | integer | false | 60000 | The idle connection timeout for the WAF service, in milliseconds. |
| config.real_client_ip | boolean | false | true | Specifies whether to determine the client IP from the `X-Forwarded-For` header. If set to `false`, the plugin uses the direct client IP from the connection. |
Below is a sample Route configuration that uses:
- `httpbun.org` as the upstream backend.
- `mode` set to `monitor`, so the plugin only logs potential blocks.
- A matching rule that triggers the plugin when the custom header `waf: true` is set.
- An override that disables the real client IP logic by setting `config.real_client_ip` to `false`.
```bash
curl http://127.0.0.1:9180/apisix/admin/routes/1 \
-H "X-API-KEY: $admin_key" \
-X PUT -d '
{
"uri": "/*",
"plugins": {
"chaitin-waf": {
"mode": "monitor",
"match": [
{
"vars": [
["http_waf", "==", "true"]
]
}
],
"config": {
"real_client_ip": false
},
"append_waf_resp_header": true,
"append_waf_debug_header": false
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbun.org:80": 1
}
}
}'
```
## Test Plugin
With the sample configuration described above (including your chosen `mode` and `real_client_ip` settings), the plugin behaves as follows:
- **If the `match` condition is not satisfied** (for example, `waf: true` is missing), the request proceeds normally without contacting the WAF. You can observe:
```bash
curl -H "Host: httpbun.org" http://127.0.0.1:9080/get -i
HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 408
Connection: keep-alive
X-APISIX-CHAITIN-WAF: no
Date: Wed, 19 Jul 2023 09:30:42 GMT
X-Powered-By: httpbun/3c0dc05883dd9212ac38b04705037d50b02f2596
Server: APISIX/3.3.0
{
"args": {},
"headers": {
"Accept": "*/*",
"Connection": "close",
"Host": "httpbun.org",
"User-Agent": "curl/8.1.2",
"X-Forwarded-For": "127.0.0.1",
"X-Forwarded-Host": "httpbun.org",
"X-Forwarded-Port": "9080",
"X-Forwarded-Proto": "http",
"X-Real-Ip": "127.0.0.1"
},
"method": "GET",
"origin": "127.0.0.1, 122.231.76.178",
"url": "http://httpbun.org/get"
}
```
- **Potential injection requests** (e.g., containing SQL snippets) are forwarded unmodified if they do not meet the plugin's match rules, and might result in a `404 Not Found` or other response from the upstream:
```bash
curl -H "Host: httpbun.org" http://127.0.0.1:9080/getid=1%20AND%201=1 -i
HTTP/1.1 404 Not Found
Content-Type: text/plain; charset=utf-8
Content-Length: 19
Connection: keep-alive
X-APISIX-CHAITIN-WAF: no
Date: Wed, 19 Jul 2023 09:30:28 GMT
X-Content-Type-Options: nosniff
X-Powered-By: httpbun/3c0dc05883dd9212ac38b04705037d50b02f2596
Server: APISIX/3.3.0
404 page not found
```
- **Matching safe requests** (those that satisfy `match.vars`, such as `-H "waf: true"`) are checked by the WAF. If deemed harmless, you see:
```bash
curl -H "Host: httpbun.org" -H "waf: true" http://127.0.0.1:9080/get -i
HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 427
Connection: keep-alive
X-APISIX-CHAITIN-WAF-TIME: 2
X-APISIX-CHAITIN-WAF-STATUS: 200
X-APISIX-CHAITIN-WAF: yes
X-APISIX-CHAITIN-WAF-ACTION: pass
Date: Wed, 19 Jul 2023 09:29:58 GMT
X-Powered-By: httpbun/3c0dc05883dd9212ac38b04705037d50b02f2596
Server: APISIX/3.3.0
{
"args": {},
"headers": {
"Accept": "*/*",
"Connection": "close",
"Host": "httpbun.org",
"User-Agent": "curl/8.1.2",
"Waf": "true",
"X-Forwarded-For": "127.0.0.1",
"X-Forwarded-Host": "httpbun.org",
"X-Forwarded-Port": "9080",
"X-Forwarded-Proto": "http",
"X-Real-Ip": "127.0.0.1"
},
"method": "GET",
"origin": "127.0.0.1, 122.231.76.178",
"url": "http://httpbun.org/get"
}
```
- **Suspicious requests** that meet the plugin's match rules and are flagged by the WAF are typically rejected with a 403 status, along with headers that include `X-APISIX-CHAITIN-WAF-ACTION: reject`. For example:
```bash
curl -H "Host: httpbun.org" -H "waf: true" http://127.0.0.1:9080/getid=1%20AND%201=1 -i
HTTP/1.1 403 Forbidden
Date: Wed, 19 Jul 2023 09:29:06 GMT
Content-Type: text/plain; charset=utf-8
Transfer-Encoding: chunked
Connection: keep-alive
X-APISIX-CHAITIN-WAF: yes
X-APISIX-CHAITIN-WAF-TIME: 2
X-APISIX-CHAITIN-WAF-ACTION: reject
X-APISIX-CHAITIN-WAF-STATUS: 403
Server: APISIX/3.3.0
Set-Cookie: sl-session=UdywdGL+uGS7q8xMfnJlbQ==; Domain=; Path=/; Max-Age=86400
{"code": 403, "success":false, "message": "blocked by Chaitin SafeLine Web Application Firewall", "event_id": "51a268653f2c4189bfa3ec66afbcb26d"}
```
## Delete Plugin
To remove the `chaitin-waf` plugin, you can delete the corresponding JSON configuration from the Plugin configuration.
APISIX will automatically reload and you do not have to restart for this to take effect:
```bash
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/*",
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbun.org:80": 1
}
}
}'
```

View File

@@ -0,0 +1,207 @@
---
title: clickhouse-logger
keywords:
- Apache APISIX
- API Gateway
- Plugin
- ClickHouse Logger
description: This document contains information about the Apache APISIX clickhouse-logger Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `clickhouse-logger` Plugin is used to push logs to [ClickHouse](https://clickhouse.com/) database.
## Attributes
| Name | Type | Required | Default | Valid values | Description |
|---------------|---------|----------|---------------------|--------------|----------------------------------------------------------------|
| endpoint_addr | string  | True     |                     |              | Deprecated. Use `endpoint_addrs` instead. ClickHouse endpoint address. |
| endpoint_addrs | array  | True     |                     |              | List of ClickHouse endpoints. |
| database | string | True | | | Name of the database to store the logs. |
| logtable | string | True | | | Table name to store the logs. |
| user | string | True | | | ClickHouse username. |
| password | string | True | | | ClickHouse password. |
| timeout       | integer | False    | 3                   | [1,...]      | Time to keep the connection alive after sending a request. |
| name | string | False | "clickhouse logger" | | Unique identifier for the logger. If you use Prometheus to monitor APISIX metrics, the name is exported in `apisix_batch_process_entries`. |
| ssl_verify | boolean | False | true | [true,false] | When set to `true`, verifies SSL. |
| log_format | object | False | | | Log format declared as key value pairs in JSON format. Values only support strings. [APISIX](../apisix-variable.md) or [Nginx](http://nginx.org/en/docs/varindex.html) variables can be used by prefixing the string with `$`. |
| include_req_body | boolean | False | false | [false, true] | When set to `true` includes the request body in the log. If the request body is too big to be kept in the memory, it can't be logged due to Nginx's limitations. |
| include_req_body_expr | array | False | | | Filter for when the `include_req_body` attribute is set to `true`. Request body is only logged when the expression set here evaluates to `true`. See [lua-resty-expr](https://github.com/api7/lua-resty-expr) for more. |
| include_resp_body | boolean | False | false | [false, true] | When set to `true` includes the response body in the log. |
| include_resp_body_expr | array | False | | | Filter for when the `include_resp_body` attribute is set to `true`. Response body is only logged when the expression set here evaluates to `true`. See [lua-resty-expr](https://github.com/api7/lua-resty-expr) for more. |
NOTE: `encrypt_fields = {"password"}` is also defined in the schema, which means that the field will be stored encrypted in etcd. See [encrypted storage fields](../plugin-develop.md#encrypted-storage-fields).
This Plugin supports using batch processors to aggregate and process entries (logs/data) in a batch. This avoids the need for frequently submitting the data. The batch processor submits data every `5` seconds or when the data in the queue reaches `1000`. See [Batch Processor](../batch-processor.md#configuration) for more information or setting your custom configuration.
### Example of default log format
```json
{
"response": {
"status": 200,
"size": 118,
"headers": {
"content-type": "text/plain",
"connection": "close",
"server": "APISIX/3.7.0",
"content-length": "12"
}
},
"client_ip": "127.0.0.1",
"upstream_latency": 3,
"apisix_latency": 98.999998092651,
"upstream": "127.0.0.1:1982",
"latency": 101.99999809265,
"server": {
"version": "3.7.0",
"hostname": "localhost"
},
"route_id": "1",
"start_time": 1704507612177,
"service_id": "",
"request": {
"method": "POST",
"querystring": {
"foo": "unknown"
},
"headers": {
"host": "localhost",
"connection": "close",
"content-length": "18"
},
"size": 110,
"uri": "/hello?foo=unknown",
"url": "http://localhost:1984/hello?foo=unknown"
}
}
```
## Metadata
You can also set the format of the logs by configuring the Plugin metadata. The following configurations are available:
| Name | Type | Required | Default | Description |
| ---------- | ------ | -------- | ----------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| log_format | object | False | | Log format declared as key value pairs in JSON format. Values only support strings. [APISIX](../apisix-variable.md) or [Nginx](http://nginx.org/en/docs/varindex.html) variables can be used by prefixing the string with `$`. |
:::info IMPORTANT
Configuring the Plugin metadata is global in scope. This means that it will take effect on all Routes and Services which use the `clickhouse-logger` Plugin.
:::
The example below shows how you can configure through the Admin API:
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl http://127.0.0.1:9180/apisix/admin/plugin_metadata/clickhouse-logger -H "X-API-KEY: $admin_key" -X PUT -d '
{
"log_format": {
"host": "$host",
"@timestamp": "$time_iso8601",
"client_ip": "$remote_addr"
}
}'
```
You can use the clickhouse docker image to create a container like so:
```shell
docker run -d -p 8123:8123 -p 9000:9000 -p 9009:9009 --name some-clickhouse-server --ulimit nofile=262144:262144 clickhouse/clickhouse-server
```
Then create a table in your ClickHouse database to store the logs.
```shell
curl -X POST 'http://localhost:8123/' \
--data-binary 'CREATE TABLE default.test (host String, client_ip String, route_id String, service_id String, `@timestamp` String, PRIMARY KEY(`@timestamp`)) ENGINE = MergeTree()' --user default:
```
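Optionally, you can confirm that the table was created by listing the tables in the `default` database (this is just a sanity check):

```shell
curl 'http://localhost:8123/?query=SHOW%20TABLES%20FROM%20default' --user default:
```

The output should include the `test` table.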
## Enable Plugin
If multiple endpoints are configured, logs are written to one of them chosen at random.
The example below shows how you can enable the Plugin on a specific Route:
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"plugins": {
"clickhouse-logger": {
"user": "default",
"password": "",
"database": "default",
"logtable": "test",
"endpoint_addrs": ["http://127.0.0.1:8123"]
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
},
"uri": "/hello"
}'
```
## Example usage
Now, if you make a request to APISIX, it will be logged in your ClickHouse database:
```shell
curl -i http://127.0.0.1:9080/hello
```
Now, if you check for the rows in the table, you will get the following output:
```shell
curl 'http://localhost:8123/?query=select%20*%20from%20default.test'
127.0.0.1 127.0.0.1 1 2023-05-08T19:15:53+05:30
```
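ClickHouse's HTTP interface also lets you choose an output format. For example, asking for `JSONEachRow` returns each log entry as a JSON object, which can be easier to read:

```shell
curl 'http://localhost:8123/?query=select%20*%20from%20default.test%20FORMAT%20JSONEachRow'
```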
## Delete Plugin
To remove the `clickhouse-logger` Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/hello",
"plugins": {},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
}
}'
```

View File

@@ -0,0 +1,113 @@
---
title: client-control
keywords:
- Apache APISIX
- API Gateway
- Client Control
description: This document describes the Apache APISIX client-control Plugin, you can use it to control NGINX behavior to handle a client request dynamically.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `client-control` Plugin can be used to dynamically control the behavior of NGINX to handle a client request, by setting the max size of the request body.
:::info IMPORTANT
This Plugin requires APISIX to run on APISIX-Runtime. See [apisix-build-tools](https://github.com/api7/apisix-build-tools) for more info.
:::
## Attributes
| Name | Type | Required | Valid values | Description |
| ------------- | ------- | -------- | ------------ | ------------------------------------------------------------------------------------------------------------------------------------ |
| max_body_size | integer | False | [0,...] | Set the maximum limit for the client request body and dynamically adjust the size of [`client_max_body_size`](https://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size), measured in bytes. If you set the `max_body_size` to 0, then the size of the client's request body will not be checked. |
## Enable Plugin
The example below enables the Plugin on a specific Route:
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl -i http://127.0.0.1:9180/apisix/admin/routes/1 \
-H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/index.html",
"plugins": {
"client-control": {
"max_body_size" : 1
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
}
}'
```
## Example usage
Since you have configured `max_body_size` to `1` above, you will get the following response when you make a request with a larger body:
```shell
curl -i http://127.0.0.1:9080/index.html -d '123'
```
```shell
HTTP/1.1 413 Request Entity Too Large
...
<html>
<head><title>413 Request Entity Too Large</title></head>
<body>
<center><h1>413 Request Entity Too Large</h1></center>
<hr><center>openresty</center>
</body>
</html>
```
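A request whose body does not exceed the configured limit, for example one without a body at all, should still be proxied normally (this is just a sanity check; the exact response depends on your upstream):

```shell
curl -i http://127.0.0.1:9080/index.html
```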
## Delete Plugin
To remove the `client-control` Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload, and you do not have to restart for this to take effect.
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 \
-H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/index.html",
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
}
}'
```

View File

@@ -0,0 +1,347 @@
---
title: consumer-restriction
keywords:
- Apache APISIX
- API Gateway
- Consumer restriction
description: The Consumer Restriction Plugin allows users to configure access restrictions on Consumer, Route, Service, or Consumer Group.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `consumer-restriction` Plugin allows users to configure access restrictions on Consumer, Route, Service, or Consumer Group.
## Attributes
| Name | Type | Required | Default | Valid values | Description |
| -------------------------- | ------------- | -------- | ------------- | ------------------------------------------------------------ | ------------------------------------------------------------ |
| type | string | False | consumer_name | ["consumer_name", "consumer_group_id", "service_id", "route_id"] | Type of object to base the restriction on. |
| whitelist | array[string] | True | | | List of objects to whitelist. Has a higher priority than `allowed_by_methods`. |
| blacklist | array[string] | True | | | List of objects to blacklist. Has a higher priority than `whitelist`. |
| rejected_code | integer | False | 403 | [200,...] | HTTP status code returned when the request is rejected. |
| rejected_msg | string | False | | | Message returned when the request is rejected. |
| allowed_by_methods | array[object] | False | | | List of allowed configurations for Consumer settings, including a username of the Consumer and a list of allowed HTTP methods. |
| allowed_by_methods.user | string | False | | | A username for a Consumer. |
| allowed_by_methods.methods | array[string] | False | | ["GET", "POST", "PUT", "DELETE", "PATCH", "HEAD", "OPTIONS", "CONNECT", "TRACE", "PURGE"] | List of allowed HTTP methods for a Consumer. |
:::note
The different values in the `type` attribute have these meanings:
- `consumer_name`: Username of the Consumer to restrict access to a Route or a Service.
- `consumer_group_id`: ID of the Consumer Group to restrict access to a Route or a Service.
- `service_id`: ID of the Service to restrict access from a Consumer. Needs to be used with an authentication Plugin.
- `route_id`: ID of the Route to restrict access from a Consumer.
:::
## Example usage
### Restricting by `consumer_name`
The example below shows how you can use the `consumer-restriction` Plugin on a Route to restrict specific consumers.
You can first create two consumers `jack1` and `jack2`:
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl http://127.0.0.1:9180/apisix/admin/consumers -H "X-API-KEY: $admin_key" -X PUT -i -d '
{
"username": "jack1",
"plugins": {
"basic-auth": {
"username":"jack2019",
"password": "123456"
}
}
}'
curl http://127.0.0.1:9180/apisix/admin/consumers -H "X-API-KEY: $admin_key" -X PUT -i -d '
{
"username": "jack2",
"plugins": {
"basic-auth": {
"username":"jack2020",
"password": "123456"
}
}
}'
```
Next, you can configure the Plugin to the Route:
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/index.html",
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
},
"plugins": {
"basic-auth": {},
"consumer-restriction": {
"whitelist": [
"jack1"
]
}
}
}'
```
Now, this configuration will only allow `jack1` to access your Route:
```shell
curl -u jack2019:123456 http://127.0.0.1:9080/index.html
```
```shell
HTTP/1.1 200 OK
```
And requests from `jack2` are blocked:
```shell
curl -u jack2020:123456 http://127.0.0.1:9080/index.html -i
```
```shell
HTTP/1.1 403 Forbidden
...
{"message":"The consumer_name is forbidden."}
```
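Conversely, if you want to block specific consumers rather than allow-list them, you could use the `blacklist` attribute instead. A sketch based on the same Route:

```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
    "uri": "/index.html",
    "upstream": {
        "type": "roundrobin",
        "nodes": {
            "127.0.0.1:1980": 1
        }
    },
    "plugins": {
        "basic-auth": {},
        "consumer-restriction": {
            "blacklist": [
                "jack1"
            ]
        }
    }
}'
```

With this configuration, requests from `jack1` are rejected while other authenticated consumers are allowed.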
### Restricting by `allowed_by_methods`
The example below configures the Plugin to a Route to restrict `jack1` to only make `POST` requests:
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/index.html",
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
},
"plugins": {
"basic-auth": {},
"consumer-restriction": {
"allowed_by_methods":[{
"user": "jack1",
"methods": ["POST"]
}]
}
}
}'
```
Now if `jack1` makes a `GET` request, the access is restricted:
```shell
curl -u jack2019:123456 http://127.0.0.1:9080/index.html
```
```shell
HTTP/1.1 403 Forbidden
...
{"message":"The consumer_name is forbidden."}
```
To also allow `GET` requests, you can update the Plugin configuration and it would be reloaded automatically:
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/index.html",
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
},
"plugins": {
"basic-auth": {},
"consumer-restriction": {
"allowed_by_methods":[{
"user": "jack1",
"methods": ["POST","GET"]
}]
}
}
}'
```
Now, if a `GET` request is made:
```shell
curl -u jack2019:123456 http://127.0.0.1:9080/index.html
```
```shell
HTTP/1.1 200 OK
```
### Restricting by `service_id`
To restrict a Consumer from accessing a Service, you also need to use an Authentication Plugin. The example below uses the [key-auth](./key-auth.md) Plugin.
First, you can create two services:
```shell
curl http://127.0.0.1:9180/apisix/admin/services/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"upstream": {
"nodes": {
"127.0.0.1:1980": 1
},
"type": "roundrobin"
},
"desc": "new service 001"
}'
curl http://127.0.0.1:9180/apisix/admin/services/2 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"upstream": {
"nodes": {
"127.0.0.1:1980": 1
},
"type": "roundrobin"
},
"desc": "new service 002"
}'
```
Then, create a Consumer with the `key-auth` Plugin and configure the `consumer-restriction` Plugin with the `service_id` to whitelist:
```shell
curl http://127.0.0.1:9180/apisix/admin/consumers -H "X-API-KEY: $admin_key" -X PUT -d '
{
"username": "new_consumer",
"plugins": {
"key-auth": {
"key": "auth-jack"
},
"consumer-restriction": {
"type": "service_id",
"whitelist": [
"1"
],
"rejected_code": 403
}
}
}'
```
Finally, you can configure the `key-auth` Plugin and bind the service to the Route:
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/index.html",
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
},
"service_id": 1,
"plugins": {
"key-auth": {
}
}
}'
```
Now, if you test the Route, you should be able to access the Service:
```shell
curl http://127.0.0.1:9080/index.html -H 'apikey: auth-jack' -i
```
```shell
HTTP/1.1 200 OK
...
```
Now, if the Route is instead bound to the Service with `service_id` `2`:
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/index.html",
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
},
"service_id": 2,
"plugins": {
"key-auth": {
}
}
}'
```
Since the Service is not in the whitelist, it cannot be accessed:
```shell
curl http://127.0.0.1:9080/index.html -H 'apikey: auth-jack' -i
```
```shell
HTTP/1.1 403 Forbidden
...
{"message":"The service_id is forbidden."}
```
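Similarly, you could restrict which Routes a Consumer may access by setting `type` to `route_id` on the Consumer; the following is a sketch only:

```shell
curl http://127.0.0.1:9180/apisix/admin/consumers -H "X-API-KEY: $admin_key" -X PUT -d '
{
    "username": "new_consumer",
    "plugins": {
        "key-auth": {
            "key": "auth-jack"
        },
        "consumer-restriction": {
            "type": "route_id",
            "whitelist": [
                "1"
            ],
            "rejected_code": 403
        }
    }
}'
```

With this in place, requests from `new_consumer` are only accepted on the Route with ID `1`.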
## Delete Plugin
To remove the `consumer-restriction` Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/index.html",
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
},
"plugins": {
"basic-auth": {}
}
}'
```

View File

@@ -0,0 +1,142 @@
---
title: cors
keywords:
- Apache APISIX
- API Gateway
- CORS
description: This document contains information about the Apache APISIX cors Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `cors` Plugin lets you enable [CORS](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS) easily.
## Attributes
### CORS attributes
| Name | Type | Required | Default | Description |
|---------------------------|---------|----------|---------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| allow_origins | string | False | "*" | Origins to allow CORS. Use the `scheme://host:port` format. For example, `https://somedomain.com:8081`. If you have multiple origins, use a `,` to list them. If `allow_credential` is set to `false`, you can enable CORS for all origins by using `*`. If `allow_credential` is set to `true`, you can forcefully allow CORS on all origins by using `**` but it will pose some security issues. |
| allow_methods | string | False | "*" | Request methods to enable CORS on. For example `GET`, `POST`. Use `,` to add multiple methods. If `allow_credential` is set to `false`, you can enable CORS for all methods by using `*`. If `allow_credential` is set to `true`, you can forcefully allow CORS on all methods by using `**` but it will pose some security issues. |
| allow_headers | string | False | "*" | Headers in the request allowed when accessing a cross-origin resource. Use `,` to add multiple headers. If `allow_credential` is set to `false`, you can enable CORS for all request headers by using `*`. If `allow_credential` is set to `true`, you can forcefully allow CORS on all request headers by using `**` but it will pose some security issues. |
| expose_headers            | string  | False    |         | Headers in the response allowed when accessing a cross-origin resource. Use `,` to add multiple headers. If `allow_credential` is set to `false`, you can enable CORS for all response headers by using `*`. If not specified, the plugin will not modify the `Access-Control-Expose-Headers` header. See [Access-Control-Expose-Headers - MDN](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Expose-Headers) for more details. |
| max_age | integer | False | 5 | Maximum time in seconds the result is cached. If the time is within this limit, the browser will check the cached result. Set to `-1` to disable caching. Note that the maximum value is browser dependent. See [Access-Control-Max-Age](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Max-Age#Directives) for more details. |
| allow_credential | boolean | False | false | When set to `true`, allows requests to include credentials like cookies. According to CORS specification, if you set this to `true`, you cannot use '*' to allow all for the other attributes. |
| allow_origins_by_regex    | array   | False    | nil     | Regex to match origins that allow CORS. For example, `[".*\.test.com$"]` can match all subdomains of `test.com`. When set, only origins matching these regexes are allowed, regardless of the `allow_origins` setting. |
| allow_origins_by_metadata | array | False | nil | Origins to enable CORS referenced from `allow_origins` set in the Plugin metadata. For example, if `"allow_origins": {"EXAMPLE": "https://example.com"}` is set in the Plugin metadata, then `["EXAMPLE"]` can be used to allow CORS on the origin `https://example.com`. |
:::info IMPORTANT
1. The `allow_credential` attribute is sensitive and must be used carefully. If it is set to `true`, the default value `*` of the other attributes becomes invalid and they must be specified explicitly.
2. When using `**` you are vulnerable to security risks like CSRF. Make sure that this meets your security levels before using it.
:::
### Resource Timing attributes
| Name | Type | Required | Default | Description |
|---------------------------|---------|----------|---------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| timing_allow_origins | string | False | nil | Origin to allow to access the resource timing information. See [Timing-Allow-Origin](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Timing-Allow-Origin). Use the `scheme://host:port` format. For example, `https://somedomain.com:8081`. If you have multiple origins, use a `,` to list them. |
| timing_allow_origins_by_regex | array | False  | nil     | Regex to match origins for enabling access to resource timing information. For example, `[".*\.test.com"]` can match all subdomains of `test.com`. When set, only origins matching these regexes are allowed, regardless of the `timing_allow_origins` setting. |
:::note
The `Timing-Allow-Origin` header is defined in the Resource Timing API, but it is related to the CORS concept.
Suppose you have two domains, `domain-A.com` and `domain-B.com`.
You are on a page on `domain-A.com`, make an XHR call to a resource on `domain-B.com`, and need its timing information.
The browser will only expose this timing information if you have cross-origin permissions on `domain-B.com`.
So, you have to set the CORS headers first, then access the `domain-B.com` URL; if [Timing-Allow-Origin](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Timing-Allow-Origin) is also set, the browser will show the requested timing information.
:::
## Metadata
| Name | Type | Required | Description |
|---------------|--------|----------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| allow_origins | object | False | A map with origin reference and allowed origins. The keys in the map are used in the attribute `allow_origins_by_metadata` and the value are equivalent to the `allow_origins` attribute of the Plugin. |
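For example, the metadata referenced by `allow_origins_by_metadata` could be configured roughly as follows (the `EXAMPLE` key and origin are illustrative):

```shell
curl http://127.0.0.1:9180/apisix/admin/plugin_metadata/cors -H "X-API-KEY: $admin_key" -X PUT -d '
{
    "allow_origins": {
        "EXAMPLE": "https://example.com"
    }
}'
```

A Route could then set `"allow_origins_by_metadata": ["EXAMPLE"]` in its `cors` configuration to allow that origin.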
## Enable Plugin
You can enable the Plugin on a specific Route or Service:
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/hello",
"plugins": {
"cors": {}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:8080": 1
}
}
}'
```
## Example usage
After enabling the Plugin, you can make a request to the server and see the CORS headers returned:
```shell
curl http://127.0.0.1:9080/hello -v
```
```shell
...
< Server: APISIX web server
< Access-Control-Allow-Origin: *
< Access-Control-Allow-Methods: *
< Access-Control-Allow-Headers: *
< Access-Control-Max-Age: 5
...
```
## Delete Plugin
To remove the `cors` Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/hello",
"plugins": {},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:8080": 1
}
}
}'
```

View File

@@ -0,0 +1,154 @@
---
title: csrf
keywords:
- Apache APISIX
- API Gateway
- Plugin
- Cross-site request forgery
- csrf
description: The CSRF Plugin can be used to protect your API against CSRF attacks using the Double Submit Cookie method.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `csrf` Plugin can be used to protect your API against [CSRF attacks](https://en.wikipedia.org/wiki/Cross-site_request_forgery) using the [Double Submit Cookie](https://en.wikipedia.org/wiki/Cross-site_request_forgery#Double_Submit_Cookie) method.
This Plugin considers the `GET`, `HEAD` and `OPTIONS` methods to be safe operations (`safe-methods`) and such requests are not checked for interception by an attacker. Other methods are termed as `unsafe-methods`.
## Attributes
| Name | Type | Required | Default | Description |
|---------|--------|----------|---------------------|---------------------------------------------------------------------------------------------|
| name | string | False | `apisix-csrf-token` | Name of the token in the generated cookie. |
| expires | number | False | `7200` | Expiration time in seconds of the CSRF cookie. Set to `0` to skip checking expiration time. |
| key | string | True | | Secret key used to encrypt the cookie. |
NOTE: `encrypt_fields = {"key"}` is also defined in the schema, which means that the field will be stored encrypted in etcd. See [encrypted storage fields](../plugin-develop.md#encrypted-storage-fields).
## Enable Plugin
The example below shows how you can enable the Plugin on a specific Route:
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl -i http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/hello",
"plugins": {
"csrf": {
"key": "edd1c9f034335f136f87ad84b625c8f1"
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:9001": 1
}
}
}'
```
The Route is now protected: requests made with `unsafe-methods` without a valid token will be blocked with a 401 status code.
Sending a `GET` request to the `/hello` endpoint will send back a cookie with an encrypted token. The name of the token can be set through the `name` attribute of the Plugin configuration and if unset, it defaults to `apisix-csrf-token`.
:::note
A new cookie is returned for each request.
:::
For subsequent requests with `unsafe-methods`, you need to read the encrypted token from the cookie and append the token to the request header by setting the field name to the `name` attribute in the Plugin configuration.
## Example usage
After you have configured the Plugin as shown above, trying to directly make a `POST` request to the `/hello` Route will result in an error:
```shell
curl -i http://127.0.0.1:9080/hello -X POST
```
```shell
HTTP/1.1 401 Unauthorized
...
{"error_msg":"no csrf token in headers"}
```
To get the cookie with the encrypted token, you can make a `GET` request:
```shell
curl -i http://127.0.0.1:9080/hello
```
```shell
HTTP/1.1 200 OK
Set-Cookie: apisix-csrf-token=eyJyYW5kb20iOjAuNjg4OTcyMzA4ODM1NDMsImV4cGlyZXMiOjcyMDAsInNpZ24iOiJcL09uZEF4WUZDZGYwSnBiNDlKREtnbzVoYkJjbzhkS0JRZXVDQm44MG9ldz0ifQ==;path=/;Expires=Mon, 13-Dec-21 09:33:55 GMT
```
This token must then be read from the cookie and added to the request header for subsequent `unsafe-methods` requests.
For example, you can use [js-cookie](https://github.com/js-cookie/js-cookie) to read the cookie and [axios](https://github.com/axios/axios) to send requests:
```js
const token = Cookie.get('apisix-csrf-token');
const instance = axios.create({
headers: {'apisix-csrf-token': token}
});
```
Also make sure that the request carries the cookie.
You can also use curl to send the request:
```shell
curl -i http://127.0.0.1:9080/hello -X POST -H 'apisix-csrf-token: eyJyYW5kb20iOjAuNjg4OTcyMzA4ODM1NDMsImV4cGlyZXMiOjcyMDAsInNpZ24iOiJcL09uZEF4WUZDZGYwSnBiNDlKREtnbzVoYkJjbzhkS0JRZXVDQm44MG9ldz0ifQ==' -b 'apisix-csrf-token=eyJyYW5kb20iOjAuNjg4OTcyMzA4ODM1NDMsImV4cGlyZXMiOjcyMDAsInNpZ24iOiJcL09uZEF4WUZDZGYwSnBiNDlKREtnbzVoYkJjbzhkS0JRZXVDQm44MG9ldz0ifQ=='
```
```shell
HTTP/1.1 200 OK
```
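Alternatively, a small shell sketch can read the token from a curl cookie jar and replay it (the `cookies.txt` file name is arbitrary):

```shell
# Fetch the cookie with a safe request and store it in a cookie jar
curl -s -c cookies.txt http://127.0.0.1:9080/hello > /dev/null

# The token value is the last field of the matching line in the cookie file
token=$(grep 'apisix-csrf-token' cookies.txt | awk '{print $NF}')

# Send the unsafe request with both the cookie and the header
curl -i http://127.0.0.1:9080/hello -X POST -b cookies.txt -H "apisix-csrf-token: $token"
```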
## Delete Plugin
To remove the `csrf` Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/hello",
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
}
}'
```

View File

@@ -0,0 +1,164 @@
---
title: datadog
keywords:
- Apache APISIX
- API Gateway
- Plugin
- Datadog
description: This document contains information about the Apache APISIX datadog Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `datadog` monitoring Plugin provides seamless integration of APISIX with [Datadog](https://www.datadoghq.com/), one of the most widely used monitoring and observability platforms for cloud applications.
When enabled, the Plugin supports multiple metric capture types for request and response cycles.
This Plugin pushes its custom metrics over the UDP protocol to the [DogStatsD](https://docs.datadoghq.com/developers/dogstatsd/?tab=hostagent) server, which comes bundled with the [Datadog agent](https://docs.datadoghq.com/agent/).
DogStatsD implements the StatsD protocol: it collects the custom metrics from the Apache APISIX agent, aggregates them into a single data point, and sends them to the configured Datadog server.
This Plugin pushes metrics to the external Datadog agent in batches, reusing the same datagram socket. The data might take some time to show up, as it is only sent after the timer in the [batch processor](../batch-processor.md) expires.
## Attributes
| Name | Type | Required | Default | Valid values | Description |
| ----------- | ------- | -------- | ------- | ------------ | -------------------------------------------------------------------------------------- |
| prefer_name | boolean | False    | true    | [true,false] | When set to `false`, metric tags use the Route/Service ID instead of the name. |
This Plugin supports using batch processors to aggregate and process entries (logs/data) in a batch. This avoids the need for frequently submitting the data. The batch processor submits data every `5` seconds or when the data in the queue reaches `1000`. See [Batch Processor](../batch-processor.md#configuration) for more information or setting your custom configuration.
## Metadata
You can configure the Plugin through the Plugin metadata.
| Name | Type | Required | Default | Description |
| ------------- | ------- | -------- | ------------------- | ----------------------------------------------------------------------------------------------------------------------------------------- |
| host | string | False | "127.0.0.1" | DogStatsD server host address. |
| port | integer | False | 8125 | DogStatsD server host port. |
| namespace | string | False | "apisix" | Prefix for all custom metrics sent by APISIX agent. Useful for finding entities for metrics graph. For example, `apisix.request.counter`. |
| constant_tags | array | False | [ "source:apisix" ] | Static tags to embed into generated metrics. Useful for grouping metrics over certain signals. |
:::tip
See [defining tags](https://docs.datadoghq.com/getting_started/tagging/#defining-tags) to know more about how to effectively use tags.
:::
By default, the Plugin expects the DogStatsD service to be available at `127.0.0.1:8125`. If you want to change this, you can update the Plugin metadata as shown below:
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl http://127.0.0.1:9180/apisix/admin/plugin_metadata/datadog -H "X-API-KEY: $admin_key" -X PUT -d '
{
"host": "172.168.45.29",
"port": 8126,
"constant_tags": [
"source:apisix",
"service:custom"
],
"namespace": "apisix"
}'
```
To reset to default configuration, make a PUT request with empty body:
```shell
curl http://127.0.0.1:9180/apisix/admin/plugin_metadata/datadog -H "X-API-KEY: $admin_key" -X PUT -d '{}'
```
## Exported metrics
When the `datadog` Plugin is enabled, the APISIX agent exports the following metrics to the DogStatsD server for each request/response cycle:
| Metric name | StatsD type | Description |
| ---------------- | ----------- | ----------------------------------------------------------------------------------------------------- |
| Request Counter | Counter | Number of requests received. |
| Request Latency | Histogram | Time taken to process the request (in milliseconds). |
| Upstream latency | Histogram | Time taken to proxy the request to the upstream server till a response is received (in milliseconds). |
| APISIX Latency | Histogram | Time taken by APISIX agent to process the request (in milliseconds). |
| Ingress Size | Timer | Request body size in bytes. |
| Egress Size | Timer | Response body size in bytes. |
The metrics will be sent to the DogStatsD agent with the following tags:
- `route_name`: Name specified in the Route schema definition. If not present or if the attribute `prefer_name` is set to false, falls back to the Route ID.
- `service_name`: If a Route has been created with an abstracted Service, the Service name/ID based on the attribute `prefer_name`.
- `consumer`: If the Route is linked to a Consumer, the username will be added as a tag.
- `balancer_ip`: IP address of the Upstream balancer that processed the current request.
- `response_status`: HTTP response status code.
- `scheme`: Request scheme such as HTTP, gRPC, and gRPCs.
:::note
If there are no suitable values for any particular tag, the tag will be omitted.
:::
## Enable Plugin
Once you have your Datadog agent running, you can enable the Plugin as shown below:
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"plugins": {
"datadog": {}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
},
"uri": "/hello"
}'
```
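If you prefer metric tags to carry the Route/Service ID instead of the name, you could pass `prefer_name` explicitly; this is just a variation of the configuration above:

```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
    "plugins": {
        "datadog": {
            "prefer_name": false
        }
    },
    "upstream": {
        "type": "roundrobin",
        "nodes": {
            "127.0.0.1:1980": 1
        }
    },
    "uri": "/hello"
}'
```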
Now, requests to the endpoint `/hello` will generate metrics and push it to the DogStatsD server.
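As a quick check, you can send a few requests to the Route and then look for metrics such as `apisix.request.counter` (under the namespace configured in the metadata) in your Datadog dashboard:

```shell
curl -i http://127.0.0.1:9080/hello
```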
## Delete Plugin
To remove the `datadog` Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"methods": ["GET"],
"uri": "/hello",
"plugins": {},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
}
}'
```

View File

@@ -0,0 +1,337 @@
---
title: degraphql
keywords:
- Apache APISIX
- API Gateway
- Plugin
- Degraphql
description: This document contains information about the Apache APISIX degraphql Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `degraphql` Plugin is used to translate RESTful API requests into GraphQL queries for the upstream GraphQL service.
## Attributes
| Name | Type | Required | Description |
| -------------- | ------ | -------- | -------------------------------------------------------------------------------------------- |
| query | string | True | The GraphQL query sent to the upstream |
| operation_name | string | False | The name of the operation. Required only when multiple operations are present in the query. |
| variables | array | False | The variables used in the GraphQL query |
## Example usage
### Start GraphQL server
We use docker to deploy a [GraphQL server demo](https://github.com/npalm/graphql-java-demo) as the backend.
```bash
docker run -d --name graphql-demo -p 8080:8080 npalm/graphql-java-demo
```
After starting the server, the following endpoints are now available:
- http://localhost:8080/graphiql - GraphQL IDE - GraphiQL
- http://localhost:8080/playground - GraphQL IDE - Prisma GraphQL Client
- http://localhost:8080/altair - GraphQL IDE - Altair GraphQL Client
- http://localhost:8080/ - A simple React frontend
- ws://localhost:8080/subscriptions
### Enable Plugin
#### Query list
If we have a GraphQL query like this:
```graphql
query {
persons {
id
name
}
}
```
We can execute it on `http://localhost:8080/playground`, and get the data as below:
```json
{
"data": {
"persons": [
{
"id": "7",
"name": "Niek"
},
{
"id": "8",
"name": "Josh"
},
......
]
}
}
```
Now we can use a RESTful API, proxied by APISIX, to query the same data.
First, we need to create a route in APISIX and enable the `degraphql` plugin on it, defining the GraphQL query in the plugin's configuration.
```bash
curl --location --request PUT 'http://localhost:9180/apisix/admin/routes/1' \
--header 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' \
--header 'Content-Type: application/json' \
--data-raw '{
"uri": "/graphql",
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:8080": 1
}
},
"plugins": {
"degraphql": {
"query": "{\n persons {\n id\n name\n }\n}\n"
}
}
}'
```
We convert the GraphQL query
```graphql
{
persons {
id
name
}
}
```
to the JSON string `"{\n persons {\n id\n name\n }\n}\n"` and put it in the plugin's configuration.
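If you prefer not to escape the query by hand, a small shell sketch (assuming `jq` is installed) produces the same JSON string:

```shell
printf '{\n persons {\n id\n name\n }\n}\n' | jq -Rs .
# => "{\n persons {\n id\n name\n }\n}\n"
```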
Then we can query the data by RESTful API:
```bash
curl --location --request POST 'http://localhost:9080/graphql'
```
and get the result:
```json
{
"data": {
"persons": [
{
"id": "7",
"name": "Niek"
},
{
"id": "8",
"name": "Josh"
},
......
]
}
}
```
#### Query with variables
If we have a GraphQL query like this:
```graphql
query($name: String!, $githubAccount: String!) {
persons(filter: { name: $name, githubAccount: $githubAccount }) {
id
name
blog
githubAccount
talks {
id
title
}
}
}
variables:
{
"name": "Niek",
"githubAccount": "npalm"
}
```
we can execute it on `http://localhost:8080/playground`, and get the data as below:
```json
{
"data": {
"persons": [
{
"id": "7",
"name": "Niek",
"blog": "https://040code.github.io",
"githubAccount": "npalm",
"talks": [
{
"id": "19",
"title": "GraphQL - The Next API Language"
},
{
"id": "20",
"title": "Immutable Infrastructure"
}
]
}
]
}
}
```
We convert the GraphQL query to JSON string like `"query($name: String!, $githubAccount: String!) {\n persons(filter: { name: $name, githubAccount: $githubAccount }) {\n id\n name\n blog\n githubAccount\n talks {\n id\n title\n }\n }\n}"`, so we create a route like this:
```bash
curl --location --request PUT 'http://localhost:9180/apisix/admin/routes/1' \
--header 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' \
--header 'Content-Type: application/json' \
--data-raw '{
"uri": "/graphql",
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:8080": 1
}
},
"plugins": {
"degraphql": {
"query": "query($name: String!, $githubAccount: String!) {\n persons(filter: { name: $name, githubAccount: $githubAccount }) {\n id\n name\n blog\n githubAccount\n talks {\n id\n title\n }\n }\n}",
"variables": [
"name",
"githubAccount"
]
}
}
}'
```
We define `variables` in the plugin's configuration as an array containing the names of the variables used in the GraphQL query, so that their values can be passed through the RESTful API.
Query the data through the RESTful API proxied by APISIX:
```bash
curl --location --request POST 'http://localhost:9080/graphql' \
--header 'Content-Type: application/json' \
--data-raw '{
"name": "Niek",
"githubAccount": "npalm"
}'
```
and get the result:
```json
{
"data": {
"persons": [
{
"id": "7",
"name": "Niek",
"blog": "https://040code.github.io",
"githubAccount": "npalm",
"talks": [
{
"id": "19",
"title": "GraphQL - The Next API Language"
},
{
"id": "20",
"title": "Immutable Infrastructure"
}
]
}
]
}
}
```
which is the same as the result of the GraphQL query.
It's also possible to get the same result via GET request:
```bash
curl 'http://localhost:9080/graphql?name=Niek&githubAccount=npalm'
```
```json
{
"data": {
"persons": [
{
"id": "7",
"name": "Niek",
"blog": "https://040code.github.io",
"githubAccount": "npalm",
"talks": [
{
"id": "19",
"title": "GraphQL - The Next API Language"
},
{
"id": "20",
"title": "Immutable Infrastructure"
}
]
}
]
}
}
```
In the GET request, the variables are passed in the query string.
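The `operation_name` attribute becomes relevant when the configured `query` contains more than one named operation. The sketch below is illustrative only (the two operations are hypothetical and reuse fields shown earlier); `operation_name` selects which operation the plugin executes:

```shell
curl --location --request PUT 'http://localhost:9180/apisix/admin/routes/1' \
--header 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' \
--header 'Content-Type: application/json' \
--data-raw '{
    "uri": "/graphql",
    "upstream": {
        "type": "roundrobin",
        "nodes": {
            "127.0.0.1:8080": 1
        }
    },
    "plugins": {
        "degraphql": {
            "query": "query PersonList {\n persons {\n id\n name\n }\n}\nquery PersonDetails {\n persons {\n id\n name\n blog\n githubAccount\n }\n}",
            "operation_name": "PersonList"
        }
    }
}'
```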
## Delete Plugin
To remove the `degraphql` Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"methods": ["GET"],
"uri": "/graphql",
"plugins": {},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:8080": 1
}
}
}'
```

View File

@@ -0,0 +1,191 @@
---
title: dubbo-proxy
keywords:
- Apache APISIX
- API Gateway
- Plugin
- Apache Dubbo
- dubbo-proxy
description: This document contains information about the Apache APISIX dubbo-proxy Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `dubbo-proxy` Plugin allows you to proxy HTTP requests to [Apache Dubbo](https://dubbo.apache.org/en/index.html).
:::info IMPORTANT
If you are using OpenResty, you need to build it with Dubbo support. See [How do I build the APISIX runtime environment](./../FAQ.md#how-do-i-build-the-apisix-runtime-environment) for details.
:::
## Runtime Attributes
| Name | Type | Required | Default | Description |
| --------------- | ------ | -------- | -------------------- | ------------------------------- |
| service_name | string | True | | Dubbo provider service name. |
| service_version | string | True | | Dubbo provider service version. |
| method | string | False | The path of the URI. | Dubbo provider service method. |
## Static Attributes
| Name | Type | Required | Default | Valid values | Description |
| ------------------------ | ------ | -------- | ------- | ------------ | --------------------------------------------------------------- |
| upstream_multiplex_count | number | True | 32 | >= 1 | Maximum number of multiplex requests in an upstream connection. |
## Enable Plugin
To enable the `dubbo-proxy` Plugin, you have to add it in your configuration file (`conf/config.yaml`):
```yaml title="conf/config.yaml"
plugins:
- ...
- dubbo-proxy
```
Now, when APISIX is reloaded, you can add it to a specific Route as shown below:
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl http://127.0.0.1:9180/apisix/admin/upstreams/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"nodes": {
"127.0.0.1:20880": 1
},
"type": "roundrobin"
}'
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uris": [
"/hello"
],
"plugins": {
"dubbo-proxy": {
"service_name": "org.apache.dubbo.sample.tengine.DemoService",
"service_version": "0.0.0",
"method": "tengineDubbo"
}
},
"upstream_id": 1
}'
```
## Example usage
You can follow the [Quick Start](https://github.com/alibaba/tengine/tree/master/modules/mod_dubbo#quick-start) guide in Tengine with the configuration above for testing.
The APISIX `dubbo-proxy` plugin uses `hessian2` as the serialization protocol and supports only `Map<String, Object>` as the request and response data type.
### Application
Your Dubbo configuration should use `hessian2` as the serialization protocol.
```yml
dubbo:
...
protocol:
...
serialization: hessian2
```
Your application should implement the interface with the request and response data type as `Map<String, Object>`.
```java
public interface DemoService {
Map<String, Object> sayHello(Map<String, Object> context);
}
```
### Request and Response
If you need to pass request data, you can add the data to the HTTP request header. The plugin will convert the HTTP request header to the request data of the Dubbo service. Here is a sample HTTP request that passes `user` information:
```bash
curl -i -X POST 'http://localhost:9080/hello' \
--header 'user: apisix'
HTTP/1.1 200 OK
Date: Mon, 15 Jan 2024 10:15:57 GMT
Content-Type: text/plain; charset=utf-8
...
hello: apisix
...
Server: APISIX/3.8.0
```
If the returned data is:
```json
{
"status": "200",
"header1": "value1",
"header2": "value2",
"body": "body of the message"
}
```
The converted HTTP response will be:
```
HTTP/1.1 200 OK
...
header1: value1
header2: value2
...
body of the message
```
## Delete Plugin
To remove the `dubbo-proxy` Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"methods": ["GET"],
"uris": [
"/hello"
],
"plugins": {
},
"upstream_id": 1
}'
```
To completely disable the `dubbo-proxy` Plugin, you can remove it from your configuration file (`conf/config.yaml`):
```yaml title="conf/config.yaml"
plugins:
# - dubbo-proxy
```

View File

@@ -0,0 +1,116 @@
---
title: echo
keywords:
- Apache APISIX
- API Gateway
- Plugin
- Echo
description: This document contains information about the Apache APISIX echo Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `echo` Plugin helps users understand how they can develop an APISIX Plugin.
This Plugin addresses common functionalities in phases like init, rewrite, access, balancer, header filter, body filter and log.
:::caution WARNING
The `echo` Plugin is built as an example. It has missing cases and should **not** be used in production environments.
:::
## Attributes
| Name | Type | Requirement | Default | Valid | Description |
| ----------- | ------ | ----------- | ------- | ----- | ----------------------------------------- |
| before_body | string | optional | | | Body to use before the filter phase. |
| body | string | optional | | | Body that replaces the Upstream response. |
| after_body | string | optional | | | Body to use after the modification phase. |
| headers | object | optional | | | New headers to use for the response. |
At least one of `before_body`, `body`, and `after_body` must be specified.
## Enable Plugin
The example below shows how you can enable the `echo` Plugin for a specific Route:
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"plugins": {
"echo": {
"before_body": "before the body modification "
}
},
"upstream": {
"nodes": {
"127.0.0.1:1980": 1
},
"type": "roundrobin"
},
"uri": "/hello"
}'
```
## Example usage
First, we configure the Plugin as mentioned above. We can then make a request as shown below:
```shell
curl -i http://127.0.0.1:9080/hello
```
```
HTTP/1.1 200 OK
...
before the body modification hello world
```
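The other attributes work the same way. For instance, a sketch that replaces the upstream response body entirely and adds a custom response header (the values here are only illustrative):

```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
    "plugins": {
        "echo": {
            "body": "body replaced by the echo plugin\n",
            "headers": {
                "X-Echo": "enabled"
            }
        }
    },
    "upstream": {
        "nodes": {
            "127.0.0.1:1980": 1
        },
        "type": "roundrobin"
    },
    "uri": "/hello"
}'
```

A request to `/hello` would then return the configured body instead of the upstream response, along with the `X-Echo` header.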
## Delete Plugin
To remove the `echo` Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 \
-H "X-API-KEY: $admin_key" -X PUT -d '
{
"methods": ["GET"],
"uri": "/hello",
"plugins": {},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
}
}'
```

View File

@@ -0,0 +1,445 @@
---
title: elasticsearch-logger
keywords:
- Apache APISIX
- API Gateway
- Plugin
- Elasticsearch-logger
description: The elasticsearch-logger Plugin pushes request and response logs in batches to Elasticsearch and supports the customization of log formats.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
<head>
<link rel="canonical" href="https://docs.api7.ai/hub/elasticsearch-logger" />
</head>
## Description
The `elasticsearch-logger` Plugin pushes request and response logs in batches to [Elasticsearch](https://www.elastic.co) and supports the customization of log formats. When enabled, the Plugin will serialize the request context information to [Elasticsearch Bulk format](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html#docs-bulk) and add them to the queue, before they are pushed to Elasticsearch. See [batch processor](../batch-processor.md) for more details.
## Attributes
| Name | Type | Required | Default | Description |
| ------------- | ------- | -------- | --------------------------- | ------------------------------------------------------------ |
| endpoint_addrs | array[string] | True | | Elasticsearch API endpoint addresses. If multiple endpoints are configured, they will be written randomly. |
| field | object | True | | Elasticsearch `field` configuration. |
| field.index | string | True | | Elasticsearch [_index field](https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-index-field.html#mapping-index-field). |
| log_format | object | False | | Custom log format in key-value pairs in JSON format. Support [APISIX](../apisix-variable.md) or [NGINX variables](http://nginx.org/en/docs/varindex.html) in values. |
| auth | array | False | | Elasticsearch [authentication](https://www.elastic.co/guide/en/elasticsearch/reference/current/setting-up-authentication.html) configuration. |
| auth.username | string | True | | Elasticsearch [authentication](https://www.elastic.co/guide/en/elasticsearch/reference/current/setting-up-authentication.html) username. |
| auth.password | string | True | | Elasticsearch [authentication](https://www.elastic.co/guide/en/elasticsearch/reference/current/setting-up-authentication.html) password. |
| ssl_verify | boolean | False | true | If true, perform SSL verification. |
| timeout | integer | False | 10 | Elasticsearch send data timeout in seconds. |
| include_req_body | boolean | False | false | If true, include the request body in the log. Note that if the request body is too big to be kept in the memory, it can not be logged due to NGINX's limitations. |
| include_req_body_expr | array[array] | False | | An array of one or more conditions in the form of [lua-resty-expr](https://github.com/api7/lua-resty-expr). Used when the `include_req_body` is true. Request body would only be logged when the expressions configured here evaluate to true. |
| include_resp_body | boolean | False | false | If true, include the response body in the log. |
| include_resp_body_expr | array[array] | False | | An array of one or more conditions in the form of [lua-resty-expr](https://github.com/api7/lua-resty-expr). Used when the `include_resp_body` is true. Response body would only be logged when the expressions configured here evaluate to true. |
NOTE: `encrypt_fields = {"auth.password"}` is also defined in the schema, which means that the field will be stored encrypted in etcd. See [encrypted storage fields](../plugin-develop.md#encrypted-storage-fields).
This Plugin supports using batch processors to aggregate and process entries (logs/data) in a batch. This avoids the need for frequently submitting the data. The batch processor submits data every `5` seconds or when the data in the queue reaches `1000`. See [Batch Processor](../batch-processor.md#configuration) for more information or setting your custom configuration.
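If the defaults do not fit your traffic pattern, the batch processor attributes can be tuned alongside the plugin's own attributes. The sketch below is illustrative only; the values are arbitrary, and the route ID and `admin_key` variable refer to the examples later in this section:

```shell
curl "http://127.0.0.1:9180/apisix/admin/routes/elasticsearch-logger-route" -X PATCH \
  -H "X-API-KEY: ${admin_key}" \
  -d '{
    "plugins": {
      "elasticsearch-logger": {
        "endpoint_addrs": ["http://elasticsearch:9200"],
        "field": {
          "index": "gateway"
        },
        "batch_max_size": 500,
        "inactive_timeout": 10
      }
    }
  }'
```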
## Plugin Metadata
| Name | Type | Required | Default | Description |
|------|------|----------|---------|-------------|
| log_format | object | False | | Custom log format in key-value pairs in JSON format. Support [APISIX variables](../apisix-variable.md) and [NGINX variables](http://nginx.org/en/docs/varindex.html) in values. |
## Examples
The examples below demonstrate how you can configure `elasticsearch-logger` Plugin for different scenarios.
To follow along the examples, start an Elasticsearch instance in Docker:
```shell
docker run -d \
--name elasticsearch \
--network apisix-quickstart-net \
-v elasticsearch_vol:/usr/share/elasticsearch/data/ \
-p 9200:9200 \
-p 9300:9300 \
-e ES_JAVA_OPTS="-Xms512m -Xmx512m" \
-e discovery.type=single-node \
-e xpack.security.enabled=false \
docker.elastic.co/elasticsearch/elasticsearch:7.17.1
```
Start a Kibana instance in Docker to visualize the indexed data in Elasticsearch:
```shell
docker run -d \
--name kibana \
--network apisix-quickstart-net \
-p 5601:5601 \
-e ELASTICSEARCH_HOSTS="http://elasticsearch:9200" \
docker.elastic.co/kibana/kibana:7.17.1
```
If successful, you should see the Kibana dashboard on [localhost:5601](http://localhost:5601).
:::note
You can fetch the APISIX `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
### Log in the Default Log Format
The following example demonstrates how you can enable the `elasticsearch-logger` Plugin on a route, which logs client requests and responses to the Route and pushes logs to Elasticsearch.
Create a Route with `elasticsearch-logger` to configure the `index` field as `gateway`:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "elasticsearch-logger-route",
"uri": "/anything",
"plugins": {
"elasticsearch-logger": {
"endpoint_addrs": ["http://elasticsearch:9200"],
"field": {
"index": "gateway"
}
}
},
"upstream": {
"nodes": {
"httpbin.org:80": 1
},
"type": "roundrobin"
    }
  }'
Send a request to the Route to generate a log entry:
```shell
curl -i "http://127.0.0.1:9080/anything"
```
You should receive an `HTTP/1.1 200 OK` response.
Navigate to the Kibana dashboard on [localhost:5601](http://localhost:5601) and under __Discover__ tab, create a new index pattern `gateway` to fetch the data from Elasticsearch. Once configured, navigate back to the __Discover__ tab and you should see a log generated, similar to the following:
```json
{
"_index": "gateway",
"_id": "CE-JL5QBOkdYRG7kEjTJ",
"_version": 1,
"_score": 1,
"_source": {
"request": {
"headers": {
"host": "127.0.0.1:9080",
"accept": "*/*",
"user-agent": "curl/8.6.0"
},
"size": 85,
"querystring": {},
"method": "GET",
"url": "http://127.0.0.1:9080/anything",
"uri": "/anything"
},
"response": {
"headers": {
"content-type": "application/json",
"access-control-allow-credentials": "true",
"server": "APISIX/3.11.0",
"content-length": "390",
"access-control-allow-origin": "*",
"connection": "close",
"date": "Mon, 13 Jan 2025 10:18:14 GMT"
},
"status": 200,
"size": 618
},
"route_id": "elasticsearch-logger-route",
"latency": 585.00003814697,
"apisix_latency": 18.000038146973,
"upstream_latency": 567,
"upstream": "50.19.58.113:80",
"server": {
"hostname": "0b9a772e68f8",
"version": "3.11.0"
},
"service_id": "",
"client_ip": "192.168.65.1"
},
"fields": {
...
}
}
```
### Log Request and Response Headers With Plugin Metadata
The following example demonstrates how you can customize log format using [Plugin Metadata](../terminology/plugin-metadata.md) and [NGINX variables](http://nginx.org/en/docs/varindex.html) to log specific headers from request and response.
In APISIX, [Plugin Metadata](../terminology/plugin-metadata.md) is used to configure the common metadata fields of all Plugin instances of the same plugin. It is useful when a Plugin is enabled across multiple resources and requires a universal update to their metadata fields.
First, create a Route with `elasticsearch-logger` as follows:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "elasticsearch-logger-route",
"uri": "/anything",
"plugins": {
"elasticsearch-logger": {
"endpoint_addrs": ["http://elasticsearch:9200"],
"field": {
"index": "gateway"
        }
      }
    },
    "upstream": {
      "nodes": {
        "httpbin.org:80": 1
      },
      "type": "roundrobin"
    }
  }'
Next, configure the Plugin metadata for `elasticsearch-logger`:
```shell
curl "http://127.0.0.1:9180/apisix/admin/plugin_metadata/elasticsearch-logger" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"log_format": {
"host": "$host",
"@timestamp": "$time_iso8601",
"client_ip": "$remote_addr",
"env": "$http_env",
"resp_content_type": "$sent_http_Content_Type"
}
}'
```
Send a request to the Route with the `env` header:
```shell
curl -i "http://127.0.0.1:9080/anything" -H "env: dev"
```
You should receive an `HTTP/1.1 200 OK` response.
Navigate to the Kibana dashboard on [localhost:5601](http://localhost:5601) and under __Discover__ tab, create a new index pattern `gateway` to fetch the data from Elasticsearch, if you have not done so already. Once configured, navigate back to the __Discover__ tab and you should see a log generated, similar to the following:
```json
{
"_index": "gateway",
"_id": "Ck-WL5QBOkdYRG7kODS0",
"_version": 1,
"_score": 1,
"_source": {
"client_ip": "192.168.65.1",
"route_id": "elasticsearch-logger-route",
"@timestamp": "2025-01-06T10:32:36+00:00",
"host": "127.0.0.1",
"resp_content_type": "application/json"
},
"fields": {
...
}
}
```
### Log Request Bodies Conditionally
The following example demonstrates how you can conditionally log request body.
Create a Route with `elasticsearch-logger` to only log the request body if the URL query string `log_body` is `yes`:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"plugins": {
"elasticsearch-logger": {
"endpoint_addrs": ["http://elasticsearch:9200"],
"field": {
"index": "gateway"
},
"include_req_body": true,
"include_req_body_expr": [["arg_log_body", "==", "yes"]]
}
},
"upstream": {
"nodes": {
"httpbin.org:80": 1
},
"type": "roundrobin"
},
"uri": "/anything",
"id": "elasticsearch-logger-route"
}'
```
Send a request to the Route with a URL query string satisfying the condition:
```shell
curl -i "http://127.0.0.1:9080/anything?log_body=yes" -X POST -d '{"env": "dev"}'
```
You should receive an `HTTP/1.1 200 OK` response.
Navigate to the Kibana dashboard on [localhost:5601](http://localhost:5601) and under __Discover__ tab, create a new index pattern `gateway` to fetch the data from Elasticsearch, if you have not done so already. Once configured, navigate back to the __Discover__ tab and you should see a log generated, similar to the following:
```json
{
"_index": "gateway",
"_id": "Dk-cL5QBOkdYRG7k7DSW",
"_version": 1,
"_score": 1,
"_source": {
"request": {
"headers": {
"user-agent": "curl/8.6.0",
"accept": "*/*",
"content-length": "14",
"host": "127.0.0.1:9080",
"content-type": "application/x-www-form-urlencoded"
},
"size": 182,
"querystring": {
"log_body": "yes"
},
"body": "{\"env\": \"dev\"}",
"method": "POST",
"url": "http://127.0.0.1:9080/anything?log_body=yes",
"uri": "/anything?log_body=yes"
},
"start_time": 1735965595203,
"response": {
"headers": {
"content-type": "application/json",
"server": "APISIX/3.11.0",
"access-control-allow-credentials": "true",
"content-length": "548",
"access-control-allow-origin": "*",
"connection": "close",
"date": "Mon, 13 Jan 2025 11:02:32 GMT"
},
"status": 200,
"size": 776
},
"route_id": "elasticsearch-logger-route",
"latency": 703.9999961853,
"apisix_latency": 34.999996185303,
"upstream_latency": 669,
"upstream": "34.197.122.172:80",
"server": {
"hostname": "0b9a772e68f8",
"version": "3.11.0"
},
"service_id": "",
"client_ip": "192.168.65.1"
},
"fields": {
...
}
}
```
Send a request to the Route without any URL query string:
```shell
curl -i "http://127.0.0.1:9080/anything" -X POST -d '{"env": "dev"}'
```
Navigate to the Kibana dashboard __Discover__ tab and you should see a log generated, but without the request body:
```json
{
"_index": "gateway",
"_id": "EU-eL5QBOkdYRG7kUDST",
"_version": 1,
"_score": 1,
"_source": {
"request": {
"headers": {
"content-type": "application/x-www-form-urlencoded",
"accept": "*/*",
"content-length": "14",
"host": "127.0.0.1:9080",
"user-agent": "curl/8.6.0"
},
"size": 169,
"querystring": {},
"method": "POST",
"url": "http://127.0.0.1:9080/anything",
"uri": "/anything"
},
"start_time": 1735965686363,
"response": {
"headers": {
"content-type": "application/json",
"access-control-allow-credentials": "true",
"server": "APISIX/3.11.0",
"content-length": "510",
"access-control-allow-origin": "*",
"connection": "close",
"date": "Mon, 13 Jan 2025 11:15:54 GMT"
},
"status": 200,
"size": 738
},
"route_id": "elasticsearch-logger-route",
"latency": 680.99999427795,
"apisix_latency": 4.9999942779541,
"upstream_latency": 676,
"upstream": "34.197.122.172:80",
"server": {
"hostname": "0b9a772e68f8",
"version": "3.11.0"
},
"service_id": "",
"client_ip": "192.168.65.1"
},
"fields": {
...
}
}
```
:::info
If you have customized the `log_format` in addition to setting `include_req_body` or `include_resp_body` to `true`, the Plugin would not include the bodies in the logs.
As a workaround, you may be able to use the NGINX variable `$request_body` in the log format, such as:
```json
{
"elasticsearch-logger": {
...,
"log_format": {"body": "$request_body"}
}
}
```
:::

View File

@@ -0,0 +1,181 @@
---
title: error-log-logger
keywords:
- Apache APISIX
- API Gateway
- Plugin
- Error log logger
description: This document contains information about the Apache APISIX error-log-logger Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `error-log-logger` Plugin is used to push APISIX's error logs (`error.log`) to TCP, [Apache SkyWalking](https://skywalking.apache.org/), Apache Kafka or ClickHouse servers. You can also set the error log level used to filter which logs are sent to the server.
It might take some time to receive the log data. It will be automatically sent after the timer function in the [batch processor](../batch-processor.md) expires.
## Attributes
| Name | Type | Required | Default | Valid values | Description |
|----------------------------------|---------|----------|--------------------------------|-----------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------|
| tcp.host | string | True | | | IP address or the hostname of the TCP server. |
| tcp.port | integer | True | | [0,...] | Target upstream port. |
| tcp.tls | boolean | False | false | | When set to `true` performs SSL verification. |
| tcp.tls_server_name | string | False | | | Server name for the new TLS extension SNI. |
| skywalking.endpoint_addr | string | False | http://127.0.0.1:12900/v3/logs | | Apache SkyWalking HTTP endpoint. |
| skywalking.service_name | string | False | APISIX | | Service name for the SkyWalking reporter. |
| skywalking.service_instance_name | String | False | APISIX Instance Name | | Service instance name for the SkyWalking reporter. Set it to `$hostname` to directly get the local hostname. |
| clickhouse.endpoint_addr | String | False | http://127.0.0.1:8213 | | ClickHouse endpoint. |
| clickhouse.user | String | False | default | | ClickHouse username. |
| clickhouse.password | String | False | | | ClickHouse password. |
| clickhouse.database | String | False | | | Name of the database to store the logs. |
| clickhouse.logtable | String | False | | | Table name to store the logs. |
| kafka.brokers | array | True | | | List of Kafka brokers (nodes). |
| kafka.brokers.host | string | True | | | The host of the Kafka broker, e.g., `192.168.1.1`. |
| kafka.brokers.port | integer | True | | [0, 65535] | The port of the Kafka broker. |
| kafka.brokers.sasl_config | object | False | | | The SASL config of the Kafka broker. |
| kafka.brokers.sasl_config.mechanism | string | False | "PLAIN" | ["PLAIN"] | The mechanism of the SASL config. |
| kafka.brokers.sasl_config.user | string | True | | | The user of sasl_config. If sasl_config exists, it's required. |
| kafka.brokers.sasl_config.password | string | True | | | The password of sasl_config. If sasl_config exists, it's required. |
| kafka.kafka_topic | string | True | | | Target Kafka topic to push the logs to. |
| kafka.producer_type | string | False | async | ["async", "sync"] | Message sending mode of the producer. |
| kafka.required_acks | integer | False | 1 | [0, 1, -1] | Number of acknowledgements the leader needs to receive for the producer to consider the request complete. This controls the durability of the sent records. The attribute follows the same configuration as the Kafka `acks` attribute. See [Apache Kafka documentation](https://kafka.apache.org/documentation/#producerconfigs_acks) for more. |
| kafka.key | string | False | | | Key used for allocating partitions for messages. |
| kafka.cluster_name | integer | False | 1 | [0,...] | Name of the cluster. Used when there are two or more Kafka clusters. Only works if the `producer_type` attribute is set to `async`. |
| kafka.meta_refresh_interval | integer | False | 30 | [1,...] | `refresh_interval` parameter in [lua-resty-kafka](https://github.com/doujiang24/lua-resty-kafka) specifies the time to auto refresh the metadata, in seconds.|
| timeout | integer | False | 3 | [1,...] | Timeout (in seconds) for the upstream to connect and send data. |
| keepalive | integer | False | 30 | [1,...] | Time in seconds to keep the connection alive after sending data. |
| level | string | False | WARN | ["STDERR", "EMERG", "ALERT", "CRIT", "ERR", "ERROR", "WARN", "NOTICE", "INFO", "DEBUG"] | Log level to filter the error logs. `ERR` is the same as `ERROR`. |
NOTE: `encrypt_fields = {"clickhouse.password"}` is also defined in the schema, which means that the field will be stored encrypted in etcd. See [encrypted storage fields](../plugin-develop.md#encrypted-storage-fields).
This Plugin supports using batch processors to aggregate and process entries (logs/data) in a batch. This avoids the need for frequently submitting the data. The batch processor submits data every `5` seconds or when the data in the queue reaches `1000`. See [Batch Processor](../batch-processor.md#configuration) for more information or setting your custom configuration.
### Example of default log format
```text
["2024/01/06 16:04:30 [warn] 11786#9692271: *1 [lua] plugin.lua:205: load(): new plugins: {"error-log-logger":true}, context: init_worker_by_lua*","\n","2024/01/06 16:04:30 [warn] 11786#9692271: *1 [lua] plugin.lua:255: load_stream(): new plugins: {"limit-conn":true,"ip-restriction":true,"syslog":true,"mqtt-proxy":true}, context: init_worker_by_lua*","\n"]
```
## Enable Plugin
To enable the Plugin, you can add it in your configuration file (`conf/config.yaml`):
```yaml title="conf/config.yaml"
plugins:
- request-id
- hmac-auth
- api-breaker
- error-log-logger
```
Once you have enabled the Plugin, you can configure it through the Plugin metadata.
### Configuring TCP server address
You can set the TCP server address by configuring the Plugin metadata as shown below:
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl http://127.0.0.1:9180/apisix/admin/plugin_metadata/error-log-logger -H "X-API-KEY: $admin_key" -X PUT -d '
{
"tcp": {
"host": "127.0.0.1",
"port": 1999
},
"inactive_timeout": 1
}'
```
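To verify the setup locally, you could run a throwaway TCP listener on that port before triggering some error logs. This is only a sketch and assumes a netcat build is available; batched log lines should appear after the configured timeout elapses:

```shell
# keep listening for log batches forwarded by APISIX (flags vary between netcat builds)
nc -lk 1999
```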
### Configuring SkyWalking OAP server address
You can configure the SkyWalking OAP server address as shown below:
```shell
curl http://127.0.0.1:9180/apisix/admin/plugin_metadata/error-log-logger -H "X-API-KEY: $admin_key" -X PUT -d '
{
"skywalking": {
"endpoint_addr":"http://127.0.0.1:12800/v3/logs"
},
"inactive_timeout": 1
}'
```
### Configuring ClickHouse server details
The Plugin sends the error log as a string to the `data` field of a table in your ClickHouse server.
You can configure it as shown below:
```shell
curl http://127.0.0.1:9180/apisix/admin/plugin_metadata/error-log-logger -H "X-API-KEY: $admin_key" -X PUT -d '
{
"clickhouse": {
"user": "default",
"password": "a",
"database": "error_log",
"logtable": "t",
"endpoint_addr": "http://127.0.0.1:8123"
}
}'
```
### Configuring Kafka server
The Plugin sends the error log to Kafka. You can configure it as shown below:
```shell
curl http://127.0.0.1:9180/apisix/admin/plugin_metadata/error-log-logger \
-H "X-API-KEY: $admin_key" -X PUT -d '
{
"kafka":{
"brokers":[
{
"host":"127.0.0.1",
"port":9092
}
],
"kafka_topic":"test2"
},
"level":"ERROR",
"inactive_timeout":1
}'
```
## Delete Plugin
To remove the Plugin, you can remove it from your configuration file (`conf/config.yaml`):
```yaml title="conf/config.yaml"
plugins:
- request-id
- hmac-auth
- api-breaker
# - error-log-logger
```

View File

@@ -0,0 +1,33 @@
---
title: ext-plugin-post-req
keywords:
- Apache APISIX
- Plugin
- ext-plugin-post-req
description: This document contains information about the Apache APISIX ext-plugin-post-req Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
`ext-plugin-post-req` differs from the [ext-plugin-pre-req](./ext-plugin-pre-req.md) Plugin in that it runs after executing the built-in Lua Plugins and before proxying to the Upstream.
You can learn more about the configuration from the [ext-plugin-pre-req](./ext-plugin-pre-req.md) Plugin document.
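Its configuration is the same as `ext-plugin-pre-req`. For instance, a sketch of enabling it on a Route (the external Plugin name and value are illustrative, and `$admin_key` is set up as in the other examples in this document):

```shell
curl -i http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
    "uri": "/index.html",
    "plugins": {
        "ext-plugin-post-req": {
            "conf" : [
                {"name": "ext-plugin-A", "value": "{\"enable\":\"feature\"}"}
            ]
        }
    },
    "upstream": {
        "type": "roundrobin",
        "nodes": {
            "127.0.0.1:1980": 1
        }
    }
}'
```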

View File

@@ -0,0 +1,111 @@
---
title: ext-plugin-post-resp
keywords:
- Apache APISIX
- API Gateway
- Plugin
- ext-plugin-post-resp
description: This document contains information about the Apache APISIX ext-plugin-post-resp Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `ext-plugin-post-resp` Plugin is for running specific external Plugins in the Plugin Runner before executing the built-in Lua Plugins.
The `ext-plugin-post-resp` plugin will be executed after the request gets a response from the upstream.
This plugin uses the [lua-resty-http](https://github.com/api7/lua-resty-http) library under the hood to send requests to the upstream, which means the [proxy-control](./proxy-control.md), [proxy-mirror](./proxy-mirror.md), and [proxy-cache](./proxy-cache.md) plugins cannot be used alongside this plugin. Also, [mTLS Between APISIX and Upstream](../mtls.md#mtls-between-apisix-and-upstream) is not yet supported.
See [External Plugin](../external-plugin.md) to learn more.
:::note
Execution of External Plugins will affect the response of the current request.
:::
## Attributes
| Name | Type | Required | Default | Valid values | Description |
|-------------------|---------|----------|---------|-----------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------|
| conf | array | False | | [{"name": "ext-plugin-A", "value": "{\"enable\":\"feature\"}"}] | List of Plugins and their configurations to be executed on the Plugin Runner. |
| allow_degradation | boolean | False | false | | Sets Plugin degradation when the Plugin Runner is not available. When set to `true`, requests are allowed to continue. |
## Enable Plugin
The example below enables the `ext-plugin-post-resp` Plugin on a specific Route:
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl -i http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/index.html",
"plugins": {
"ext-plugin-post-resp": {
"conf" : [
{"name": "ext-plugin-A", "value": "{\"enable\":\"feature\"}"}
]
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
}
}'
```
## Example usage
Once you have configured the External Plugin as shown above, you can make a request to execute the Plugin:
```shell
curl -i http://127.0.0.1:9080/index.html
```
This will reach the configured Plugin Runner and the `ext-plugin-A` will be executed.
## Delete Plugin
To remove the `ext-plugin-post-resp` Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/index.html",
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
}
}'
```

View File

@@ -0,0 +1,107 @@
---
title: ext-plugin-pre-req
keywords:
- Apache APISIX
- API Gateway
- Plugin
- ext-plugin-pre-req
description: This document contains information about the Apache APISIX ext-plugin-pre-req Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `ext-plugin-pre-req` Plugin is for running specific external Plugins in the Plugin Runner before executing the built-in Lua Plugins.
See [External Plugin](../external-plugin.md) to learn more.
:::note
Execution of External Plugins will affect the behavior of the current request.
:::
## Attributes
| Name | Type | Required | Default | Valid values | Description |
|-------------------|---------|----------|---------|-----------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------|
| conf | array | False | | [{"name": "ext-plugin-A", "value": "{\"enable\":\"feature\"}"}] | List of Plugins and their configurations to be executed on the Plugin Runner. |
| allow_degradation | boolean | False | false | | Sets Plugin degradation when the Plugin Runner is not available. When set to `true`, requests are allowed to continue. |
## Enable Plugin
The example below enables the `ext-plugin-pre-req` Plugin on a specific Route:
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl -i http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/index.html",
"plugins": {
"ext-plugin-pre-req": {
"conf" : [
{"name": "ext-plugin-A", "value": "{\"enable\":\"feature\"}"}
]
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
}
}'
```
## Example usage
Once you have configured the External Plugin as shown above, you can make a request to execute the Plugin:
```shell
curl -i http://127.0.0.1:9080/index.html
```
This will reach the configured Plugin Runner and the `ext-plugin-A` will be executed.
## Delete Plugin
To remove the `ext-plugin-pre-req` Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/index.html",
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
}
}'
```

View File

@@ -0,0 +1,293 @@
---
title: fault-injection
keywords:
- Apache APISIX
- API Gateway
- Plugin
- Fault Injection
- fault-injection
description: This document contains information about the Apache APISIX fault-injection Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `fault-injection` Plugin can be used to test the resiliency of your application. This Plugin will be executed before the other configured Plugins.
The `abort` attribute will directly return the specified HTTP code to the client and skips executing the subsequent Plugins.
The `delay` attribute delays a request and executes the subsequent Plugins.
## Attributes
| Name | Type | Requirement | Default | Valid | Description |
|-------------------|---------|-------------|---------|------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------|
| abort.http_status | integer | required | | [200, ...] | HTTP status code of the response to return to the client. |
| abort.body | string | optional | | | Body of the response returned to the client. Nginx variables like `client addr: $remote_addr\n` can be used in the body. |
| abort.headers | object | optional | | | Headers of the response returned to the client. The values in the header can contain Nginx variables like `$remote_addr`. |
| abort.percentage | integer | optional | | [0, 100] | Percentage of requests to be aborted. |
| abort.vars | array[] | optional | | | Rules which are matched before executing fault injection. See [lua-resty-expr](https://github.com/api7/lua-resty-expr) for a list of available expressions. |
| delay.duration | number | required | | | Duration of the delay. Can be decimal. |
| delay.percentage | integer | optional | | [0, 100] | Percentage of requests to be delayed. |
| delay.vars | array[] | optional | | | Rules which are matched before executing fault injection. See [lua-resty-expr](https://github.com/api7/lua-resty-expr) for a list of available expressions. |
:::info IMPORTANT
To use the `fault-injection` Plugin one of `abort` or `delay` must be specified.
:::
:::tip
`vars` can have expressions from [lua-resty-expr](https://github.com/api7/lua-resty-expr) and can flexibly implement AND/OR relationship between rules. For example:
```json
[
[
[ "arg_name","==","jack" ],
[ "arg_age","==",18 ]
],
[
[ "arg_name2","==","allen" ]
]
]
```
This means that the relationship between the first two expressions is AND, and the relationship between them and the third expression is OR.
:::
## Enable Plugin
You can enable the `fault-injection` Plugin on a specific Route as shown below:
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"plugins": {
"fault-injection": {
"abort": {
"http_status": 200,
"body": "Fault Injection!"
}
}
},
"upstream": {
"nodes": {
"127.0.0.1:1980": 1
},
"type": "roundrobin"
},
"uri": "/hello"
}'
```
Similarly, to enable a `delay` fault:
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"plugins": {
"fault-injection": {
"delay": {
"duration": 3
}
}
},
"upstream": {
"nodes": {
"127.0.0.1:1980": 1
},
"type": "roundrobin"
},
"uri": "/hello"
}'
```
You can also enable the Plugin with both `abort` and `delay` which can have `vars` for matching:
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"plugins": {
"fault-injection": {
"abort": {
"http_status": 403,
"body": "Fault Injection!\n",
"vars": [
[
[ "arg_name","==","jack" ]
]
]
},
"delay": {
"duration": 2,
"vars": [
[
[ "http_age","==","18" ]
]
]
}
}
},
"upstream": {
"nodes": {
"127.0.0.1:1980": 1
},
"type": "roundrobin"
},
"uri": "/hello"
}'
```
## Example usage
Once you have enabled the Plugin as shown above, you can make a request to the configured Route:
```shell
curl http://127.0.0.1:9080/hello -i
```
```
HTTP/1.1 200 OK
Date: Mon, 13 Jan 2020 13:50:04 GMT
Content-Type: text/plain
Transfer-Encoding: chunked
Connection: keep-alive
Server: APISIX web server
Fault Injection!
```
And if we configure the `delay` fault:
```shell
time curl http://127.0.0.1:9080/hello -i
```
```
HTTP/1.1 200 OK
Content-Type: application/octet-stream
Content-Length: 6
Connection: keep-alive
Server: APISIX web server
Date: Tue, 14 Jan 2020 14:30:54 GMT
Last-Modified: Sat, 11 Jan 2020 12:46:21 GMT
hello
real 0m3.034s
user 0m0.007s
sys 0m0.010s
```
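The `percentage` attribute limits the fault to a portion of the traffic. For example, the following sketch aborts roughly half of the requests with a 503 while the remaining requests are proxied normally (the values are illustrative):

```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
    "plugins": {
        "fault-injection": {
            "abort": {
                "http_status": 503,
                "body": "Fault Injection!\n",
                "percentage": 50
            }
        }
    },
    "upstream": {
        "nodes": {
            "127.0.0.1:1980": 1
        },
        "type": "roundrobin"
    },
    "uri": "/hello"
}'
```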
### Fault injection with criteria matching
You can enable the `fault-injection` Plugin with the `vars` attribute to set specific rules:
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"plugins": {
"fault-injection": {
"abort": {
"http_status": 403,
"body": "Fault Injection!\n",
"vars": [
[
[ "arg_name","==","jack" ]
]
]
}
}
},
"upstream": {
"nodes": {
"127.0.0.1:1980": 1
},
"type": "roundrobin"
},
"uri": "/hello"
}'
```
Now, we can test the Route. First, we test with a different `name` argument:
```shell
curl "http://127.0.0.1:9080/hello?name=allen" -i
```
You will get the expected response without the fault injected:
```
HTTP/1.1 200 OK
Content-Type: application/octet-stream
Transfer-Encoding: chunked
Connection: keep-alive
Date: Wed, 20 Jan 2021 07:21:57 GMT
Server: APISIX/2.2
hello
```
Now if we set the `name` to match our configuration, the `fault-injection` Plugin is executed:
```shell
curl "http://127.0.0.1:9080/hello?name=jack" -i
```
```
HTTP/1.1 403 Forbidden
Date: Wed, 20 Jan 2021 07:23:37 GMT
Content-Type: text/plain; charset=utf-8
Transfer-Encoding: chunked
Connection: keep-alive
Server: APISIX/2.2
Fault Injection!
```
## Delete Plugin
To remove the `fault-injection` Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/hello",
"plugins": {},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
}
}'
```

View File

@@ -0,0 +1,226 @@
---
title: file-logger
keywords:
- Apache APISIX
- API Gateway
- Plugin
- File Logger
description: This document contains information about the Apache APISIX file-logger Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `file-logger` Plugin is used to push log streams to a specific location.
:::tip
- The `file-logger` plugin can count request and response data for individual routes locally, which is useful for [debugging](../debug-mode.md).
- The `file-logger` plugin can use [APISIX variables](../apisix-variable.md) and [NGINX variables](http://nginx.org/en/docs/varindex.html), while `access.log` can only use NGINX variables.
- The `file-logger` plugin supports hot reloading, so its configuration can be changed at any time with immediate effect.
- The `file-logger` plugin saves all data in JSON format.
- You can modify the functions executed by `file-logger` during the `log` phase to collect the information you want.
:::
## Attributes
| Name | Type | Required | Description |
| ---- | ------ | -------- | ------------- |
| path | string | True | Log file path. |
| log_format | object | False | Log format declared as key value pairs in JSON format. Values only support strings. [APISIX](../apisix-variable.md) or [Nginx](http://nginx.org/en/docs/varindex.html) variables can be used by prefixing the string with `$`. |
| include_req_body | boolean | False | When set to `true` includes the request body in the log. If the request body is too big to be kept in the memory, it can't be logged due to Nginx's limitations. |
| include_req_body_expr | array | False | Filter for when the `include_req_body` attribute is set to `true`. Request body is only logged when the expression set here evaluates to `true`. See [lua-resty-expr](https://github.com/api7/lua-resty-expr) for more. |
| include_resp_body | boolean | False | When set to `true` includes the response body in the log file. |
| include_resp_body_expr | array | False | When the `include_resp_body` attribute is set to `true`, use this to filter based on [lua-resty-expr](https://github.com/api7/lua-resty-expr). If present, only logs the response into file if the expression evaluates to `true`. |
| match | array[array] | False | Logs will be recorded when the rule matching is successful if the option is set. See [lua-resty-expr](https://github.com/api7/lua-resty-expr#operator-list) for a list of available expressions. |
### Example of default log format
```json
{
"service_id": "",
"apisix_latency": 100.99999809265,
"start_time": 1703907485819,
"latency": 101.99999809265,
"upstream_latency": 1,
"client_ip": "127.0.0.1",
"route_id": "1",
"server": {
"version": "3.7.0",
"hostname": "localhost"
},
"request": {
"headers": {
"host": "127.0.0.1:1984",
"content-type": "application/x-www-form-urlencoded",
"user-agent": "lua-resty-http/0.16.1 (Lua) ngx_lua/10025",
"content-length": "12"
},
"method": "POST",
"size": 194,
"url": "http://127.0.0.1:1984/hello?log_body=no",
"uri": "/hello?log_body=no",
"querystring": {
"log_body": "no"
}
},
"response": {
"headers": {
"content-type": "text/plain",
"connection": "close",
"content-length": "12",
"server": "APISIX/3.7.0"
},
"status": 200,
"size": 123
},
"upstream": "127.0.0.1:1982"
}
```
## Metadata
You can also set the format of the logs by configuring the Plugin metadata. The following configurations are available:
| Name | Type | Required | Default | Description |
| ---------- | ------ | -------- | ----------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| log_format | object | False | | Log format declared as key value pairs in JSON format. Values only support strings. [APISIX](../apisix-variable.md) or [Nginx](http://nginx.org/en/docs/varindex.html) variables can be used by prefixing the string with `$`. |
The example below shows how you can configure through the Admin API:
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl http://127.0.0.1:9180/apisix/admin/plugin_metadata/file-logger -H "X-API-KEY: $admin_key" -X PUT -d '
{
"log_format": {
"host": "$host",
"@timestamp": "$time_iso8601",
"client_ip": "$remote_addr"
}
}'
```
With this configuration, your logs would be formatted as shown below:
```shell
{"host":"localhost","@timestamp":"2020-09-23T19:05:05-04:00","client_ip":"127.0.0.1","route_id":"1"}
{"host":"localhost","@timestamp":"2020-09-23T19:05:05-04:00","client_ip":"127.0.0.1","route_id":"1"}
```
## Enable Plugin
The example below shows how you can enable the Plugin on a specific Route:
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"plugins": {
"file-logger": {
"path": "logs/file.log"
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:9001": 1
}
},
"uri": "/hello"
}'
```
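The route-level `log_format` and request-body attributes from the table above can be combined in the same configuration. A minimal sketch that only logs the request body when the `log_body` query argument equals `yes` (the `log_format` field names are illustrative):

```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
  "plugins": {
    "file-logger": {
      "path": "logs/file.log",
      "log_format": {
        "host": "$host",
        "client_ip": "$remote_addr"
      },
      "include_req_body": true,
      "include_req_body_expr": [
        [ "arg_log_body", "==", "yes" ]
      ]
    }
  },
  "upstream": {
    "type": "roundrobin",
    "nodes": {
      "127.0.0.1:9001": 1
    }
  },
  "uri": "/hello"
}'
```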
## Example usage
Now, if you make a request, it will be logged in the path you specified:
```shell
curl -i http://127.0.0.1:9080/hello
```
You will be able to find the `file.log` file in the configured `logs` directory.
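Each entry is written as a single JSON line, so you can inspect the latest record with standard shell tools. For example (piping through `jq` assumes it is installed):

```shell
tail -n 1 logs/file.log | jq .
```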
## Filter logs
The `match` attribute lets you log only the requests that satisfy an expression. The example below only logs requests where the `name` query argument equals `jack`:
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 \
-H "X-API-KEY: $admin_key" -X PUT -d '
{
"plugins": {
"file-logger": {
"path": "logs/file.log",
"match": [
[
[ "arg_name","==","jack" ]
]
]
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:9001": 1
}
},
"uri": "/hello"
}'
```
Test:
```shell
curl -i http://127.0.0.1:9080/hello?name=jack
```
Log records can be seen in `logs/file.log`.
```shell
curl -i http://127.0.0.1:9080/hello?name=rose
```
Log records cannot be seen in `logs/file.log`.
## Delete Plugin
To remove the `file-logger` Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"methods": ["GET"],
"uri": "/hello",
"plugins": {},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:9001": 1
}
}
}'
```
@@ -0,0 +1,186 @@
---
title: forward-auth
keywords:
- Apache APISIX
- API Gateway
- Plugin
- Forward Authentication
- forward-auth
description: This document contains information about the Apache APISIX forward-auth Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `forward-auth` Plugin implements a classic external authentication model. When authentication fails, you can have a custom error message or redirect the user to an authentication page.
This Plugin moves the authentication and authorization logic to a dedicated external service. APISIX forwards the user's request to the external service; if the external service responds with a non-2xx status code, the original request is blocked and the external service's response is returned instead.
## Attributes
| Name | Type | Required | Default | Valid values | Description |
| ----------------- | ------------- | -------- | ------- | -------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------- |
| uri | string | True | | | URI of the authorization service. |
| ssl_verify | boolean | False | true | | When set to `true`, verifies the SSL certificate. |
| request_method    | string        | False    | GET     | ["GET","POST"] | HTTP method for a client to send requests to the authorization service. When set to `POST`, the request body is sent to the authorization service.           |
| request_headers | array[string] | False | | | Client request headers to be sent to the authorization service. If not set, only the headers provided by APISIX are sent (for example, `X-Forwarded-XXX`). |
| upstream_headers | array[string] | False | | | Authorization service response headers to be forwarded to the Upstream. If not set, no headers are forwarded to the Upstream service. |
| client_headers | array[string] | False | | | Authorization service response headers to be sent to the client when authorization fails. If not set, no headers will be sent to the client. |
| timeout | integer | False | 3000ms | [1, 60000]ms | Timeout for the authorization service HTTP call. |
| keepalive | boolean | False | true | | When set to `true`, keeps the connection alive for multiple requests. |
| keepalive_timeout | integer | False | 60000ms | [1000, ...]ms | Idle time after which the connection is closed. |
| keepalive_pool    | integer       | False    | 5       | [1, ...]       | Connection pool limit. |
| allow_degradation | boolean | False | false | | When set to `true`, allows authentication to be skipped when authentication server is unavailable. |
| status_on_error   | integer       | False    | 403     | [200,...,599]  | Sets the HTTP status that is returned to the client when there is a network error when calling the authorization service. The default status is `403` (Forbidden). |
## Data definition
APISIX will generate and send the request headers listed below to the authorization service:
| Scheme | HTTP Method | Host | URI | Source IP |
| ----------------- | ------------------ | ---------------- | --------------- | --------------- |
| X-Forwarded-Proto | X-Forwarded-Method | X-Forwarded-Host | X-Forwarded-Uri | X-Forwarded-For |
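For illustration, a request that APISIX forwards to the authorization service might carry headers like the following (the values here are hypothetical, and the `Authorization` header is only included if it is listed in `request_headers`):

```
GET /auth HTTP/1.1
Host: 127.0.0.1:9080
X-Forwarded-Proto: http
X-Forwarded-Method: GET
X-Forwarded-Host: 127.0.0.1
X-Forwarded-Uri: /headers
X-Forwarded-For: 192.168.1.10
Authorization: 123
```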
## Example usage
First, you need to set up your external authorization service. The example below uses Apache APISIX's [serverless](./serverless.md) Plugin to mock the service:
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl -X PUT 'http://127.0.0.1:9180/apisix/admin/routes/auth' \
-H "X-API-KEY: $admin_key" \
-H 'Content-Type: application/json' \
-d '{
"uri": "/auth",
"plugins": {
"serverless-pre-function": {
"phase": "rewrite",
"functions": [
"return function (conf, ctx)
local core = require(\"apisix.core\");
local authorization = core.request.header(ctx, \"Authorization\");
if authorization == \"123\" then
core.response.exit(200);
elseif authorization == \"321\" then
core.response.set_header(\"X-User-ID\", \"i-am-user\");
core.response.exit(200);
else core.response.set_header(\"Location\", \"http://example.com/auth\");
core.response.exit(403);
end
end"
]
}
}
}'
```
Now you can configure the `forward-auth` Plugin on a specific Route:
```shell
curl -X PUT 'http://127.0.0.1:9180/apisix/admin/routes/1' \
-H "X-API-KEY: $admin_key" \
-d '{
"uri": "/headers",
"plugins": {
"forward-auth": {
"uri": "http://127.0.0.1:9080/auth",
"request_headers": ["Authorization"],
"upstream_headers": ["X-User-ID"],
"client_headers": ["Location"]
}
},
"upstream": {
"nodes": {
"httpbin.org:80": 1
},
"type": "roundrobin"
}
}'
```
Now if we send the authorization details in the request header:
```shell
curl http://127.0.0.1:9080/headers -H 'Authorization: 123'
```
```
{
"headers": {
"Authorization": "123",
"Next": "More-headers"
}
}
```
The authorization service response can also be forwarded to the Upstream:
```shell
curl http://127.0.0.1:9080/headers -H 'Authorization: 321'
```
```
{
"headers": {
"Authorization": "321",
"X-User-ID": "i-am-user",
"Next": "More-headers"
}
}
```
When authorization fails, the authorization service can send a custom response back to the user:
```shell
curl -i http://127.0.0.1:9080/headers
```
```
HTTP/1.1 403 Forbidden
Location: http://example.com/auth
```
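If the authorization service itself is unreachable, the `status_on_error` and `allow_degradation` attributes control what the client sees. A sketch that returns `503` instead of the default `403` on such network errors (the other attributes are unchanged from the earlier example):

```shell
curl -X PATCH 'http://127.0.0.1:9180/apisix/admin/routes/1' \
    -H "X-API-KEY: $admin_key" \
    -d '{
    "plugins": {
        "forward-auth": {
            "uri": "http://127.0.0.1:9080/auth",
            "request_headers": ["Authorization"],
            "upstream_headers": ["X-User-ID"],
            "client_headers": ["Location"],
            "status_on_error": 503
        }
    }
}'
```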
## Delete Plugin
To remove the `forward-auth` Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"methods": ["GET"],
"uri": "/hello",
"plugins": {},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
}
}'
```
@@ -0,0 +1,31 @@
---
title: GM
keywords:
- Apache APISIX
- Plugin
- GM
description: This article introduces the basic information and usage of the Apache APISIX `gm` plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
:::info
The features described in this article are mainly used in China, so this article currently only has a Chinese version. You can click [here](https://apisix.apache.org/zh/docs/apisix/plugins/gm/) for more details. If you are interested in this feature, you are welcome to translate this document.
:::
@@ -0,0 +1,223 @@
---
title: google-cloud-logging
keywords:
- Apache APISIX
- API Gateway
- Plugin
- Google Cloud logging
description: This document contains information about the Apache APISIX google-cloud-logging Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `google-cloud-logging` Plugin is used to send APISIX access logs to [Google Cloud Logging Service](https://cloud.google.com/logging/).
This plugin also allows you to push logs in batches to your Google Cloud Logging Service. It might take some time to receive the log data, as logs are only sent after the timer function in the [batch processor](../batch-processor.md) expires.
## Attributes
| Name | Required | Default | Description |
|-------------------------|----------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| auth_config | True | | Either `auth_config` or `auth_file` must be provided. |
| auth_config.client_email | True | | Email address of the Google Cloud service account. |
| auth_config.private_key | True | | Private key of the Google Cloud service account. |
| auth_config.project_id | True | | Project ID in the Google Cloud service account. |
| auth_config.token_uri | True | https://oauth2.googleapis.com/token | Token URI of the Google Cloud service account. |
| auth_config.entries_uri | False | https://logging.googleapis.com/v2/entries:write | Google Cloud Logging Service API. |
| auth_config.scope | False | ["https://www.googleapis.com/auth/logging.read", "https://www.googleapis.com/auth/logging.write", "https://www.googleapis.com/auth/logging.admin", "https://www.googleapis.com/auth/cloud-platform"] | Access scopes of the Google Cloud service account. See [OAuth 2.0 Scopes for Google APIs](https://developers.google.com/identity/protocols/oauth2/scopes#logging). |
| auth_config.scopes | Deprecated | ["https://www.googleapis.com/auth/logging.read", "https://www.googleapis.com/auth/logging.write", "https://www.googleapis.com/auth/logging.admin", "https://www.googleapis.com/auth/cloud-platform"] | Access scopes of the Google Cloud service account. Use `auth_config.scope` instead. |
| auth_file | True | | Path to the Google Cloud service account authentication JSON file. Either `auth_config` or `auth_file` must be provided. |
| ssl_verify | False | true | When set to `true`, enables SSL verification as mentioned in [OpenResty docs](https://github.com/openresty/lua-nginx-module#tcpsocksslhandshake). |
| resource | False | {"type": "global"} | Google monitor resource. See [MonitoredResource](https://cloud.google.com/logging/docs/reference/v2/rest/v2/MonitoredResource) for more details. |
| log_id | False | apisix.apache.org%2Flogs | Google Cloud logging ID. See [LogEntry](https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry) for details. |
| log_format | False | | Log format declared as key value pairs in JSON format. Values only support strings. [APISIX](../apisix-variable.md) or [Nginx](http://nginx.org/en/docs/varindex.html) variables can be used by prefixing the string with `$`. |
NOTE: `encrypt_fields = {"auth_config.private_key"}` is also defined in the schema, which means that the field will be stored encrypted in etcd. See [encrypted storage fields](../plugin-develop.md#encrypted-storage-fields).
This Plugin supports using batch processors to aggregate and process entries (logs/data) in a batch. This avoids the need for frequently submitting the data. The batch processor submits data every `5` seconds or when the data in the queue reaches `1000`. See [Batch Processor](../batch-processor.md#configuration) for more information or to set your custom configuration.
### Example of default log format
```json
{
"insertId": "0013a6afc9c281ce2e7f413c01892bdc",
"labels": {
"source": "apache-apisix-google-cloud-logging"
},
"logName": "projects/apisix/logs/apisix.apache.org%2Flogs",
"httpRequest": {
"requestMethod": "GET",
"requestUrl": "http://localhost:1984/hello",
"requestSize": 59,
"responseSize": 118,
"status": 200,
"remoteIp": "127.0.0.1",
"serverIp": "127.0.0.1:1980",
"latency": "0.103s"
},
"resource": {
"type": "global"
},
"jsonPayload": {
"service_id": "",
"route_id": "1"
},
"timestamp": "2024-01-06T03:34:45.065Z"
}
```
## Metadata
You can also set the format of the logs by configuring the Plugin metadata. The following configurations are available:
| Name | Type | Required | Default | Description |
| ---------- | ------ | -------- | ----------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| log_format | object | False | | Log format declared as key value pairs in JSON format. Values only support strings. [APISIX](../apisix-variable.md) or [Nginx](http://nginx.org/en/docs/varindex.html) variables can be used by prefixing the string with `$`. |
:::info IMPORTANT
Configuring the Plugin metadata is global in scope. This means that it will take effect on all Routes and Services which use the `google-cloud-logging` Plugin.
:::
The example below shows how you can configure through the Admin API:
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl http://127.0.0.1:9180/apisix/admin/plugin_metadata/google-cloud-logging -H "X-API-KEY: $admin_key" -X PUT -d '
{
"log_format": {
"host": "$host",
"@timestamp": "$time_iso8601",
"client_ip": "$remote_addr"
}
}'
```
With this configuration, your logs would be formatted as shown below:
```json
{"partialSuccess":false,"entries":[{"jsonPayload":{"client_ip":"127.0.0.1","host":"localhost","@timestamp":"2023-01-09T14:47:25+08:00","route_id":"1"},"resource":{"type":"global"},"insertId":"942e81f60b9157f0d46bc9f5a8f0cc40","logName":"projects/apisix/logs/apisix.apache.org%2Flogs","timestamp":"2023-01-09T14:47:25+08:00","labels":{"source":"apache-apisix-google-cloud-logging"}}]}
```
## Enable Plugin
### Full configuration
The example below shows a complete configuration of the Plugin on a specific Route:
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"plugins": {
"google-cloud-logging": {
"auth_config":{
"project_id":"apisix",
"client_email":"your service account email@apisix.iam.gserviceaccount.com",
"private_key":"-----BEGIN RSA PRIVATE KEY-----your private key-----END RSA PRIVATE KEY-----",
"token_uri":"https://oauth2.googleapis.com/token",
"scope":[
"https://www.googleapis.com/auth/logging.admin"
],
"entries_uri":"https://logging.googleapis.com/v2/entries:write"
},
"resource":{
"type":"global"
},
"log_id":"apisix.apache.org%2Flogs",
"inactive_timeout":10,
"max_retry_count":0,
"buffer_duration":60,
"retry_delay":1,
"batch_max_size":1
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
},
"uri": "/hello"
}'
```
### Minimal configuration
The example below shows a bare minimum configuration of the Plugin on a Route:
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"plugins": {
"google-cloud-logging": {
"auth_config":{
"project_id":"apisix",
"client_email":"your service account email@apisix.iam.gserviceaccount.com",
"private_key":"-----BEGIN RSA PRIVATE KEY-----your private key-----END RSA PRIVATE KEY-----"
}
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
},
"uri": "/hello"
}'
```
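If you prefer not to embed the service account credentials in the configuration, `auth_file` can instead point to the service account JSON file on the APISIX node. A minimal sketch (the file path is hypothetical):

```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
    "plugins": {
        "google-cloud-logging": {
            "auth_file": "/path/to/service-account.json"
        }
    },
    "upstream": {
        "type": "roundrobin",
        "nodes": {
            "127.0.0.1:1980": 1
        }
    },
    "uri": "/hello"
}'
```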
## Example usage
Now, if you make a request to APISIX, it will be logged in your Google Cloud Logging Service.
```shell
curl -i http://127.0.0.1:9080/hello
```
You can then login and view the logs in [Google Cloud Logging Service](https://console.cloud.google.com/logs/viewer).
## Delete Plugin
To remove the `google-cloud-logging` Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/hello",
"plugins": {},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
}
}'
```
@@ -0,0 +1,391 @@
---
title: grpc-transcode
keywords:
- Apache APISIX
- API Gateway
- Plugin
- gRPC Transcode
- grpc-transcode
description: This document contains information about the Apache APISIX grpc-transcode Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `grpc-transcode` Plugin converts between HTTP and gRPC requests.
APISIX takes in an HTTP request, transcodes it and forwards it to a gRPC service, gets the response and returns it back to the client in HTTP format.
<!-- TODO: use an image here to explain the concept better -->
## Attributes
| Name | Type | Required | Default | Description |
| --------- | ------------------------------------------------------ | -------- | ------- | ------------------------------------ |
| proto_id  | string/integer                                          | True     |         | ID of the proto content.             |
| service | string | True | | Name of the gRPC service. |
| method | string | True | | Method name of the gRPC service. |
| deadline | number | False | 0 | Deadline for the gRPC service in ms. |
| pb_option | array[string([pb_option_def](#options-for-pb_option))] | False | | protobuf options. |
| show_status_in_body | boolean | False | false | Whether to display the parsed `grpc-status-details-bin` in the response body |
| status_detail_type | string | False | | The message type corresponding to the [details](https://github.com/googleapis/googleapis/blob/b7cb84f5d42e6dba0fdcc2d8689313f6a8c9d7b9/google/rpc/status.proto#L46) part of `grpc-status-details-bin`, if not specified, this part will not be decoded |
### Options for pb_option
| Type | Valid values |
|-----------------|-------------------------------------------------------------------------------------------|
| enum as result | `enum_as_name`, `enum_as_value` |
| int64 as result | `int64_as_number`, `int64_as_string`, `int64_as_hexstring` |
| default values | `auto_default_values`, `no_default_values`, `use_default_values`, `use_default_metatable` |
| hooks | `enable_hooks`, `disable_hooks` |
## Enable Plugin
Before enabling the Plugin, you have to add the content of your `.proto` or `.pb` files to APISIX.
You can use the `/admin/protos/id` endpoint and add the contents of the file to the `content` field:
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl http://127.0.0.1:9180/apisix/admin/protos/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"content" : "syntax = \"proto3\";
package helloworld;
service Greeter {
rpc SayHello (HelloRequest) returns (HelloReply) {}
}
message HelloRequest {
string name = 1;
}
message HelloReply {
string message = 1;
}"
}'
```
If your proto file contains imports, or if you want to combine multiple proto files, you can generate a `.pb` file and use it in APISIX.
For example, if we have a file called `proto/helloworld.proto` which imports another proto file:
```proto
syntax = "proto3";
package helloworld;
import "proto/import.proto";
...
```
We first generate a `.pb` file from the proto files:
```shell
protoc --include_imports --descriptor_set_out=proto.pb proto/helloworld.proto
```
The output binary file, `proto.pb` will contain both `helloworld.proto` and `import.proto`.
We can now use the content of `proto.pb` in the `content` field of the API request.
As the content of the proto is binary, we encode it in `base64` and configure the content in APISIX:
```shell
curl http://127.0.0.1:9180/apisix/admin/protos/1 \
-H "X-API-KEY: $admin_key" -X PUT -d '
{
"content" : "'"$(base64 -w0 /path/to/proto.pb)"'"
}'
```
You should see an `HTTP/1.1 201 Created` response with the following:
```
{"node":{"value":{"create_time":1643879753,"update_time":1643883085,"content":"CmgKEnByb3RvL2ltcG9ydC5wcm90bxIDcGtnIhoKBFVzZXISEgoEbmFtZRgBIAEoCVIEbmFtZSIeCghSZXNwb25zZRISCgRib2R5GAEgASgJUgRib2R5QglaBy4vcHJvdG9iBnByb3RvMwq9AQoPcHJvdG8vc3JjLnByb3RvEgpoZWxsb3dvcmxkGhJwcm90by9pbXBvcnQucHJvdG8iPAoHUmVxdWVzdBIdCgR1c2VyGAEgASgLMgkucGtnLlVzZXJSBHVzZXISEgoEYm9keRgCIAEoCVIEYm9keTI5CgpUZXN0SW1wb3J0EisKA1J1bhITLmhlbGxvd29ybGQuUmVxdWVzdBoNLnBrZy5SZXNwb25zZSIAQglaBy4vcHJvdG9iBnByb3RvMw=="},"key":"\/apisix\/proto\/1"}}
```
Now, we can enable the `grpc-transcode` Plugin on a specific Route:
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/111 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"methods": ["GET"],
"uri": "/grpctest",
"plugins": {
"grpc-transcode": {
"proto_id": "1",
"service": "helloworld.Greeter",
"method": "SayHello"
}
},
"upstream": {
"scheme": "grpc",
"type": "roundrobin",
"nodes": {
"127.0.0.1:50051": 1
}
}
}'
```
:::note
The Upstream service used here should be a gRPC service. Note that the `scheme` is set to `grpc`.
You can use the [grpc_server_example](https://github.com/api7/grpc_server_example) for testing.
:::
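Before testing through APISIX, you may want to confirm that the upstream gRPC service is reachable. A sketch using [grpcurl](https://github.com/fullstorydev/grpcurl) (this assumes grpcurl is installed and the example server above is running on `127.0.0.1:50051`; if the server does not expose gRPC reflection, pass the proto file with `-proto`):

```shell
# call the SayHello method directly on the upstream gRPC service
grpcurl -plaintext -d '{"name": "world"}' 127.0.0.1:50051 helloworld.Greeter/SayHello
```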
## Example usage
Once you configured the Plugin as mentioned above, you can make a request to APISIX to get a response back from the gRPC service (through APISIX):
```shell
curl -i http://127.0.0.1:9080/grpctest?name=world
```
Response:
```shell
HTTP/1.1 200 OK
Date: Fri, 16 Aug 2019 11:55:36 GMT
Content-Type: application/json
Transfer-Encoding: chunked
Connection: keep-alive
Server: APISIX web server
Proxy-Connection: keep-alive
{"message":"Hello world"}
```
You can also configure the `pb_option` as shown below:
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/23 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"methods": ["GET"],
"uri": "/zeebe/WorkflowInstanceCreate",
"plugins": {
"grpc-transcode": {
"proto_id": "1",
"service": "gateway_protocol.Gateway",
"method": "CreateWorkflowInstance",
"pb_option":["int64_as_string"]
}
},
"upstream": {
"scheme": "grpc",
"type": "roundrobin",
"nodes": {
"127.0.0.1:26500": 1
}
}
}'
```
Now if you check the configured Route:
```shell
curl -i "http://127.0.0.1:9080/zeebe/WorkflowInstanceCreate?bpmnProcessId=order-process&version=1&variables=\{\"orderId\":\"7\",\"ordervalue\":99\}"
```
```
HTTP/1.1 200 OK
Date: Wed, 13 Nov 2019 03:38:27 GMT
Content-Type: application/json
Transfer-Encoding: chunked
Connection: keep-alive
grpc-encoding: identity
grpc-accept-encoding: gzip
Server: APISIX web server
Trailer: grpc-status
Trailer: grpc-message
{"workflowKey":"#2251799813685260","workflowInstanceKey":"#2251799813688013","bpmnProcessId":"order-process","version":1}
```
## Show `grpc-status-details-bin` in response body
If the gRPC service returns an error, there may be a `grpc-status-details-bin` field in the response header describing the error, which you can decode and display in the response body.
Upload the proto file
```shell
curl http://127.0.0.1:9180/apisix/admin/protos/1 \
-H "X-API-KEY: $admin_key" -X PUT -d '
{
"content" : "syntax = \"proto3\";
package helloworld;
service Greeter {
rpc GetErrResp (HelloRequest) returns (HelloReply) {}
}
message HelloRequest {
string name = 1;
repeated string items = 2;
}
message HelloReply {
string message = 1;
repeated string items = 2;
}"
}'
```
Enable the `grpc-transcode` plugin and set the option `show_status_in_body` to `true`:
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 \
-H "X-API-KEY: $admin_key" -X PUT -d '
{
"methods": ["GET"],
"uri": "/grpctest",
"plugins": {
"grpc-transcode": {
"proto_id": "1",
"service": "helloworld.Greeter",
"method": "GetErrResp",
"show_status_in_body": true
}
},
"upstream": {
"scheme": "grpc",
"type": "roundrobin",
"nodes": {
"127.0.0.1:50051": 1
}
}
}'
```
Access the route configured above
```shell
curl -i http://127.0.0.1:9080/grpctest?name=world
```
Response:
```Shell
HTTP/1.1 503 Service Temporarily Unavailable
Date: Wed, 10 Aug 2022 08:59:46 GMT
Content-Type: application/json
Transfer-Encoding: chunked
Connection: keep-alive
grpc-status: 14
grpc-message: Out of service
grpc-status-details-bin: CA4SDk91dCBvZiBzZXJ2aWNlGlcKKnR5cGUuZ29vZ2xlYXBpcy5jb20vaGVsbG93b3JsZC5FcnJvckRldGFpbBIpCAESHFRoZSBzZXJ2ZXIgaXMgb3V0IG9mIHNlcnZpY2UaB3NlcnZpY2U
Server: APISIX web server
{"error":{"details":[{"type_url":"type.googleapis.com\/helloworld.ErrorDetail","value":"\b\u0001\u0012\u001cThe server is out of service\u001a\u0007service"}],"message":"Out of service","code":14}}
```
Note that there is an undecoded field in the response body. To decode that field, add its message type to the uploaded proto file:
```shell
curl http://127.0.0.1:9180/apisix/admin/protos/1 \
-H "X-API-KEY: $admin_key" -X PUT -d '
{
"content" : "syntax = \"proto3\";
package helloworld;
service Greeter {
rpc GetErrResp (HelloRequest) returns (HelloReply) {}
}
message HelloRequest {
string name = 1;
repeated string items = 2;
}
message HelloReply {
string message = 1;
repeated string items = 2;
}
message ErrorDetail {
int64 code = 1;
string message = 2;
string type = 3;
}"
}'
```
Also configure the option `status_detail_type` to `helloworld.ErrorDetail`.
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 \
-H "X-API-KEY: $admin_key" -X PUT -d '
{
"methods": ["GET"],
"uri": "/grpctest",
"plugins": {
"grpc-transcode": {
"proto_id": "1",
"service": "helloworld.Greeter",
"method": "GetErrResp",
"show_status_in_body": true,
"status_detail_type": "helloworld.ErrorDetail"
}
},
"upstream": {
"scheme": "grpc",
"type": "roundrobin",
"nodes": {
"127.0.0.1:50051": 1
}
}
}'
```
The fully decoded result is returned.
```Shell
HTTP/1.1 503 Service Temporarily Unavailable
Date: Wed, 10 Aug 2022 09:02:46 GMT
Content-Type: application/json
Transfer-Encoding: chunked
Connection: keep-alive
grpc-status: 14
grpc-message: Out of service
grpc-status-details-bin: CA4SDk91dCBvZiBzZXJ2aWNlGlcKKnR5cGUuZ29vZ2xlYXBpcy5jb20vaGVsbG93b3JsZC5FcnJvckRldGFpbBIpCAESHFRoZSBzZXJ2ZXIgaXMgb3V0IG9mIHNlcnZpY2UaB3NlcnZpY2U
Server: APISIX web server
{"error":{"details":[{"type":"service","message":"The server is out of service","code":1}],"message":"Out of service","code":14}}
```
## Delete Plugin
To remove the `grpc-transcode` Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/111 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/grpctest",
"plugins": {},
"upstream": {
"scheme": "grpc",
"type": "roundrobin",
"nodes": {
"127.0.0.1:50051": 1
}
}
}'
```
@@ -0,0 +1,110 @@
---
title: grpc-web
keywords:
- Apache APISIX
- API Gateway
- Plugin
- gRPC Web
- grpc-web
description: This document contains information about the Apache APISIX grpc-web Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `grpc-web` Plugin is a proxy Plugin that can process [gRPC Web](https://github.com/grpc/grpc-web) requests from JavaScript clients to a gRPC service.
## Attributes
| Name | Type | Required | Default | Description |
|-------------------------|---------|----------|-----------------------------------------|----------------------------------------------------------------------------------------------------------|
| cors_allow_headers | string | False | "content-type,x-grpc-web,x-user-agent" | Headers in the request allowed when accessing a cross-origin resource. Use `,` to add multiple headers. |
## Enable Plugin
You can enable the `grpc-web` Plugin on a specific Route as shown below:
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri":"/grpc/web/*",
"plugins":{
"grpc-web":{}
},
"upstream":{
"scheme":"grpc",
"type":"roundrobin",
"nodes":{
"127.0.0.1:1980":1
}
}
}'
```
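If your client sends additional request headers, you can extend the CORS allow-list with `cors_allow_headers`. A sketch (the extra `x-request-id` header is only an example):

```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
    "uri":"/grpc/web/*",
    "plugins":{
        "grpc-web":{
            "cors_allow_headers": "content-type,x-grpc-web,x-user-agent,x-request-id"
        }
    },
    "upstream":{
        "scheme":"grpc",
        "type":"roundrobin",
        "nodes":{
            "127.0.0.1:1980":1
        }
    }
}'
```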
:::info IMPORTANT
While using the `grpc-web` Plugin, always use a prefix matching pattern (`/*`, `/grpc/example/*`) for matching Routes. This is because the gRPC Web client passes the package name, the service interface name, the method name and other information in the proto in the URI. For example, `/path/a6.RouteService/Insert`.
So, when absolute matching is used, the Plugin would not be hit and the information from the proto would not be extracted.
:::
## Example usage
Refer to [gRPC-Web Client Runtime Library](https://www.npmjs.com/package/grpc-web) or [Apache APISIX gRPC Web Test Framework](https://github.com/apache/apisix/tree/master/t/plugin/grpc-web) to learn how to setup your web client.
Once you have your gRPC Web client running, you can make a request to APISIX from the browser or through Node.js.
:::note
The supported request methods are `POST` and `OPTIONS`. See [CORS support](https://github.com/grpc/grpc-web/blob/master/doc/browser-features.md#cors-support).
The supported `Content-Type` includes `application/grpc-web`, `application/grpc-web-text`, `application/grpc-web+proto`, and `application/grpc-web-text+proto`. See [Protocol differences vs gRPC over HTTP2](https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-WEB.md#protocol-differences-vs-grpc-over-http2).
:::
## Delete Plugin
To remove the `grpc-web` Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri":"/grpc/web/*",
"plugins":{},
"upstream":{
"scheme":"grpc",
"type":"roundrobin",
"nodes":{
"127.0.0.1:1980":1
}
}
}'
```
@@ -0,0 +1,123 @@
---
title: gzip
keywords:
- Apache APISIX
- API Gateway
- Plugin
- gzip
description: This document contains information about the Apache APISIX gzip Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `gzip` Plugin dynamically sets the behavior of [gzip in Nginx](https://docs.nginx.com/nginx/admin-guide/web-server/compression/).
When the `gzip` plugin is enabled, the client needs to include `Accept-Encoding: gzip` in the request header to indicate that it supports gzip compression. Upon receiving the request, APISIX dynamically determines whether to compress the response content based on the client's support and the server configuration. If the conditions are met, APISIX adds the `Content-Encoding: gzip` header to the response, indicating that the response content has been compressed with gzip. Upon receiving the response, the client uses the algorithm indicated by the `Content-Encoding` header to decompress it and obtain the original content.
:::info IMPORTANT
This Plugin requires APISIX to run on [APISIX-Runtime](../FAQ.md#how-do-i-build-the-apisix-runtime-environment).
:::
## Attributes
| Name | Type | Required | Default | Valid values | Description |
|----------------|----------------------|----------|---------------|--------------|-----------------------------------------------------------------------------------------|
| types | array[string] or "*" | False | ["text/html"] | | Dynamically sets the `gzip_types` directive. Special value `"*"` matches any MIME type. |
| min_length | integer | False | 20 | >= 1 | Dynamically sets the `gzip_min_length` directive. |
| comp_level | integer | False | 1 | [1, 9] | Dynamically sets the `gzip_comp_level` directive. |
| http_version | number | False | 1.1 | 1.1, 1.0 | Dynamically sets the `gzip_http_version` directive. |
| buffers.number | integer | False | 32 | >= 1 | Dynamically sets the `gzip_buffers` directive parameter `number`. |
| buffers.size | integer | False | 4096 | >= 1 | Dynamically sets the `gzip_buffers` directive parameter `size`. The unit is in bytes. |
| vary | boolean | False | false | | Dynamically sets the `gzip_vary` directive. |
## Enable Plugin
The example below enables the `gzip` Plugin on the specified Route:
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl -i http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/index.html",
"plugins": {
"gzip": {
"buffers": {
"number": 8
}
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
}
}'
```
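To compress additional MIME types or trade CPU for a smaller payload, the other attributes from the table above can be combined. A sketch:

```shell
curl -i http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
    "uri": "/index.html",
    "plugins": {
        "gzip": {
            "types": ["text/html", "application/json"],
            "min_length": 100,
            "comp_level": 5,
            "vary": true
        }
    },
    "upstream": {
        "type": "roundrobin",
        "nodes": {
            "127.0.0.1:1980": 1
        }
    }
}'
```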
## Example usage
Once you have configured the Plugin as shown above, you can make a request as shown below:
```shell
curl http://127.0.0.1:9080/index.html -i -H "Accept-Encoding: gzip"
```
```
HTTP/1.1 404 Not Found
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked
Connection: keep-alive
Date: Wed, 21 Jul 2021 03:52:55 GMT
Server: APISIX/2.7
Content-Encoding: gzip
Warning: Binary output can mess up your terminal. Use "--output -" to tell
Warning: curl to output it to your terminal anyway, or consider "--output
Warning: <FILE>" to save to a file.
```
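To view the decompressed body directly, you can let curl handle the decoding with `--compressed`, which also sends the `Accept-Encoding` header for you:

```shell
curl -sS --compressed -i http://127.0.0.1:9080/index.html
```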
## Delete Plugin
To remove the `gzip` Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/index.html",
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
}
}'
```
@@ -0,0 +1,760 @@
---
title: hmac-auth
keywords:
- Apache APISIX
- API Gateway
- Plugin
- HMAC Authentication
- hmac-auth
description: The hmac-auth Plugin supports HMAC authentication to ensure request integrity, preventing modifications during transmission and enhancing API security.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `hmac-auth` Plugin supports HMAC (Hash-based Message Authentication Code) authentication as a mechanism to ensure the integrity of requests, preventing them from being modified during transmissions. To use the Plugin, you would configure HMAC secret keys on [Consumers](../terminology/consumer.md) and enable the Plugin on Routes or Services.
When a Consumer is successfully authenticated, APISIX adds additional headers, such as `X-Consumer-Username`, `X-Credential-Identifier`, and other Consumer custom headers if configured, to the request, before proxying it to the Upstream service. The Upstream service will be able to differentiate between Consumers and implement additional logic as needed. If any of these values is not available, the corresponding header will not be added.
Once enabled, the Plugin verifies the HMAC signature in the request's `Authorization` header and checks that incoming requests are from trusted sources. Specifically, when APISIX receives an HMAC-signed request, the key ID is extracted from the `Authorization` header. APISIX then retrieves the corresponding Consumer configuration, including the secret key. If the key ID is valid and exists, APISIX generates an HMAC signature using the request's `Date` header and the secret key. If this generated signature matches the signature provided in the `Authorization` header, the request is authenticated and forwarded to Upstream services.
The Plugin implementation is based on [draft-cavage-http-signatures](https://www.ietf.org/archive/id/draft-cavage-http-signatures-12.txt).
## Attributes
The following attributes are available for configurations on Consumers or Credentials.
| Name | Type | Required | Default | Valid values | Description |
|-----------------------|---------------|----------|---------------|---------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| key_id | string | True | | | Unique identifier for the Consumer, which identifies the associated configurations such as the secret key. |
| secret_key | string | True | | | Secret key used to generate an HMAC. This field supports saving the value in Secret Manager using the [APISIX Secret](../terminology/secret.md) resource. |
The following attributes are available for configurations on Routes or Services.
| Name | Type | Required | Default | Valid values | Description |
|-----------------------|---------------|----------|---------------|---------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| allowed_algorithms    | array[string] | False    | ["hmac-sha1", "hmac-sha256", "hmac-sha512"] | combination of "hmac-sha1", "hmac-sha256", and "hmac-sha512" | The list of HMAC algorithms allowed. |
| clock_skew            | integer       | False    | 300           | >=1                                         | Maximum allowable time difference in seconds between the client request's timestamp and APISIX server's current time. This helps account for discrepancies in time synchronization between the client's and server's clocks and protects against replay attacks. The timestamp in the Date header (must be in GMT format) will be used for the calculation. |
| signed_headers | array[string] | False | | | The list of HMAC-signed headers that should be included in the client request's HMAC signature. |
| validate_request_body | boolean       | False    | false         |                                             | If true, validate the integrity of the request body to ensure it has not been tampered with during transmission. Specifically, the Plugin creates a SHA-256 base64-encoded digest and compares it to the `Digest` header. If the `Digest` header is missing or if the digests do not match, the validation fails. |
| hide_credentials | boolean | False | false | | If true, do not pass the authorization request header to Upstream services. |
| anonymous_consumer | string | False | | | Anonymous Consumer name. If configured, allow anonymous users to bypass the authentication. |
NOTE: `encrypt_fields = {"secret_key"}` is also defined in the schema, which means that the field will be stored encrypted in etcd. See [encrypted storage fields](../plugin-develop.md#encrypted-storage-fields).
## Examples
The examples below demonstrate how you can work with the `hmac-auth` Plugin for different scenarios.
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
### Implement HMAC Authentication on a Route
The following example demonstrates how to implement HMAC authentication on a Route. You will also attach a Consumer custom ID to the authenticated request in the `X-Consumer-Custom-Id` header, which can be used to implement additional logic as needed.
Create a Consumer `john` with a custom ID label:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"username": "john",
"labels": {
"custom_id": "495aec6a"
}
}'
```
Create `hmac-auth` Credential for the Consumer:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers/john/credentials" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "cred-john-hmac-auth",
"plugins": {
"hmac-auth": {
"key_id": "john-key",
"secret_key": "john-secret-key"
}
}
}'
```
Create a Route with the `hmac-auth` Plugin using its default configurations:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "hmac-auth-route",
"uri": "/get",
"methods": ["GET"],
"plugins": {
"hmac-auth": {}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Generate a signature. You can use the below Python snippet or other stack of your choice:
```python title="hmac-sig-header-gen.py"
import hmac
import hashlib
import base64
from datetime import datetime, timezone
key_id = "john-key" # key id
secret_key = b"john-secret-key" # secret key
request_method = "GET" # HTTP method
request_path = "/get" # Route URI
algorithm= "hmac-sha256" # can use other algorithms in allowed_algorithms
# get current datetime in GMT
# note: the signature will become invalid after the clock skew (default 300s)
# you can regenerate the signature after it becomes invalid, or increase the clock
# skew to prolong the validity within the advised security boundary
gmt_time = datetime.now(timezone.utc).strftime('%a, %d %b %Y %H:%M:%S GMT')
# construct the signing string (ordered)
# the date and any subsequent custom headers should be lowercased and separated by a
# single space character, i.e. `<key>:<space><value>`
# https://datatracker.ietf.org/doc/html/draft-cavage-http-signatures-12#section-2.1.6
signing_string = (
f"{key_id}\n"
f"{request_method} {request_path}\n"
f"date: {gmt_time}\n"
)
# create signature
signature = hmac.new(secret_key, signing_string.encode('utf-8'), hashlib.sha256).digest()
signature_base64 = base64.b64encode(signature).decode('utf-8')
# construct the request headers
headers = {
"Date": gmt_time,
"Authorization": (
f'Signature keyId="{key_id}",algorithm="{algorithm}",'
f'headers="@request-target date",'
f'signature="{signature_base64}"'
)
}
# print headers
print(headers)
```
Run the script:
```shell
python3 hmac-sig-header-gen.py
```
You should see the request headers printed:
```text
{'Date': 'Fri, 06 Sep 2024 06:41:29 GMT', 'Authorization': 'Signature keyId="john-key",algorithm="hmac-sha256",headers="@request-target date",signature="wWfKQvPDr0wHQ4IHdluB4IzeNZcj0bGJs2wvoCOT5rM="'}
```
Using the headers generated, send a request to the route:
```shell
curl -X GET "http://127.0.0.1:9080/get" \
-H "Date: Fri, 06 Sep 2024 06:41:29 GMT" \
-H 'Authorization: Signature keyId="john-key",algorithm="hmac-sha256",headers="@request-target date",signature="wWfKQvPDr0wHQ4IHdluB4IzeNZcj0bGJs2wvoCOT5rM="'
```
You should see an `HTTP/1.1 200 OK` response similar to the following:
```json
{
"args": {},
"headers": {
"Accept": "*/*",
"Authorization": "Signature keyId=\"john-key\",algorithm=\"hmac-sha256\",headers=\"@request-target date\",signature=\"wWfKQvPDr0wHQ4IHdluB4IzeNZcj0bGJs2wvoCOT5rM=\"",
"Date": "Fri, 06 Sep 2024 06:41:29 GMT",
"Host": "127.0.0.1",
"User-Agent": "curl/8.6.0",
"X-Amzn-Trace-Id": "Root=1-66d96513-2e52d4f35c9b6a2772d667ea",
"X-Consumer-Username": "john",
"X-Credential-Identifier": "cred-john-hmac-auth",
"X-Consumer-Custom-Id": "495aec6a",
"X-Forwarded-Host": "127.0.0.1"
},
"origin": "192.168.65.1, 34.0.34.160",
"url": "http://127.0.0.1/get"
}
```
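If the signature cannot be verified, for instance because it was generated with a different secret key or the request falls outside the allowed `clock_skew`, APISIX rejects the request with a `401 Unauthorized` response. A sketch with a deliberately invalid signature value:

```shell
curl -i -X GET "http://127.0.0.1:9080/get" \
  -H "Date: Fri, 06 Sep 2024 06:41:29 GMT" \
  -H 'Authorization: Signature keyId="john-key",algorithm="hmac-sha256",headers="@request-target date",signature="aW52YWxpZC1zaWduYXR1cmU="'
```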
### Hide Authorization Information From Upstream
As seen in the [last example](#implement-hmac-authentication-on-a-route), the `Authorization` header passed to the Upstream includes the signature and all other details. This could potentially introduce security risks.
The following example demonstrates how to prevent these information from being sent to the Upstream service.
Update the Plugin configuration to set `hide_credentials` to `true`:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes/hmac-auth-route" -X PATCH \
-H "X-API-KEY: ${admin_key}" \
-d '{
"plugins": {
"hmac-auth": {
"hide_credentials": true
}
}
}'
```
Send a request to the route:
```shell
curl -X GET "http://127.0.0.1:9080/get" \
-H "Date: Fri, 06 Sep 2024 06:41:29 GMT" \
-H 'Authorization: Signature keyId="john-key",algorithm="hmac-sha256",headers="@request-target date",signature="wWfKQvPDr0wHQ4IHdluB4IzeNZcj0bGJs2wvoCOT5rM="'
```
You should see an `HTTP/1.1 200 OK` response and notice the `Authorization` header is entirely removed:
```json
{
"args": {},
"headers": {
"Accept": "*/*",
"Host": "127.0.0.1",
"User-Agent": "curl/8.6.0",
"X-Amzn-Trace-Id": "Root=1-66d96513-2e52d4f35c9b6a2772d667ea",
"X-Consumer-Username": "john",
"X-Credential-Identifier": "cred-john-hmac-auth",
"X-Forwarded-Host": "127.0.0.1"
},
"origin": "192.168.65.1, 34.0.34.160",
"url": "http://127.0.0.1/get"
}
```
### Enable Body Validation
The following example demonstrates how to enable body validation to ensure the integrity of the request body.
Create a Consumer `john`:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"username": "john"
}'
```
Create `hmac-auth` Credential for the Consumer:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers/john/credentials" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "cred-john-hmac-auth",
"plugins": {
"hmac-auth": {
"key_id": "john-key",
"secret_key": "john-secret-key"
}
}
}'
```
Create a Route with the `hmac-auth` Plugin as such:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "hmac-auth-route",
"uri": "/post",
"methods": ["POST"],
"plugins": {
"hmac-auth": {
"validate_request_body": true
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Generate a signature. You can use the below Python snippet or other stack of your choice:
```python title="hmac-sig-digest-header-gen.py"
import hmac
import hashlib
import base64
from datetime import datetime, timezone
key_id = "john-key" # key id
secret_key = b"john-secret-key" # secret key
request_method = "POST" # HTTP method
request_path = "/post" # Route URI
algorithm= "hmac-sha256" # can use other algorithms in allowed_algorithms
body = '{"name": "world"}' # example request body
# get current datetime in GMT
# note: the signature will become invalid after the clock skew (default 300s).
# you can regenerate the signature after it becomes invalid, or increase the clock
# skew to prolong the validity within the advised security boundary
gmt_time = datetime.now(timezone.utc).strftime('%a, %d %b %Y %H:%M:%S GMT')
# construct the signing string (ordered)
# the date and any subsequent custom headers should be lowercased and separated by a
# single space character, i.e. `<key>:<space><value>`
# https://datatracker.ietf.org/doc/html/draft-cavage-http-signatures-12#section-2.1.6
signing_string = (
f"{key_id}\n"
f"{request_method} {request_path}\n"
f"date: {gmt_time}\n"
)
# create signature
signature = hmac.new(secret_key, signing_string.encode('utf-8'), hashlib.sha256).digest()
signature_base64 = base64.b64encode(signature).decode('utf-8')
# create the SHA-256 digest of the request body and base64 encode it
body_digest = hashlib.sha256(body.encode('utf-8')).digest()
body_digest_base64 = base64.b64encode(body_digest).decode('utf-8')
# construct the request headers
headers = {
"Date": gmt_time,
"Digest": f"SHA-256={body_digest_base64}",
"Authorization": (
f'Signature keyId="{key_id}",algorithm="hmac-sha256",'
f'headers="@request-target date",'
f'signature="{signature_base64}"'
)
}
# print headers
print(headers)
```
Run the script:
```shell
python3 hmac-sig-digest-header-gen.py
```
You should see the request headers printed:
```text
{'Date': 'Fri, 06 Sep 2024 09:16:16 GMT', 'Digest': 'SHA-256=78qzJuLwSpZ8HacsTdFCQJWxzPMOf8bYctRk2ySLpS8=', 'Authorization': 'Signature keyId="john-key",algorithm="hmac-sha256",headers="@request-target date",signature="rjS6NxOBKmzS8CZL05uLiAfE16hXdIpMD/L/HukOTYE="'}
```
Using the headers generated, send a request to the route:
```shell
curl "http://127.0.0.1:9080/post" -X POST \
-H "Date: Fri, 06 Sep 2024 09:16:16 GMT" \
-H "Digest: SHA-256=78qzJuLwSpZ8HacsTdFCQJWxzPMOf8bYctRk2ySLpS8=" \
-H 'Authorization: Signature keyId="john-key",algorithm="hmac-sha256",headers="@request-target date",signature="rjS6NxOBKmzS8CZL05uLiAfE16hXdIpMD/L/HukOTYE="' \
-d '{"name": "world"}'
```
You should see an `HTTP/1.1 200 OK` response similar to the following:
```json
{
"args": {},
"data": "",
"files": {},
"form": {
"{\"name\": \"world\"}": ""
},
"headers": {
"Accept": "*/*",
"Authorization": "Signature keyId=\"john-key\",algorithm=\"hmac-sha256\",headers=\"@request-target date\",signature=\"rjS6NxOBKmzS8CZL05uLiAfE16hXdIpMD/L/HukOTYE=\"",
"Content-Length": "17",
"Content-Type": "application/x-www-form-urlencoded",
"Date": "Fri, 06 Sep 2024 09:16:16 GMT",
"Digest": "SHA-256=78qzJuLwSpZ8HacsTdFCQJWxzPMOf8bYctRk2ySLpS8=",
"Host": "127.0.0.1",
"User-Agent": "curl/8.6.0",
"X-Amzn-Trace-Id": "Root=1-66d978c3-49f929ad5237da5340bbbeb4",
"X-Consumer-Username": "john",
"X-Credential-Identifier": "cred-john-hmac-auth",
"X-Forwarded-Host": "127.0.0.1"
},
"json": null,
"origin": "192.168.65.1, 34.0.34.160",
"url": "http://127.0.0.1/post"
}
```
If you send a request without the digest or with an invalid digest:
```shell
curl "http://127.0.0.1:9080/post" -X POST \
-H "Date: Fri, 06 Sep 2024 09:16:16 GMT" \
-H "Digest: SHA-256=78qzJuLwSpZ8HacsTdFCQJWxzPMOf8bYctRk2ySLpS8=" \
-H 'Authorization: Signature keyId="john-key",algorithm="hmac-sha256",headers="@request-target date",signature="rjS6NxOBKmzS8CZL05uLiAfE16hXdIpMD/L/HukOTYE="' \
-d '{"name": "world"}'
```
You should see an `HTTP/1.1 401 Unauthorized` response with the following message:
```text
{"message":"client request can't be validated"}
```
### Mandate Signed Headers
The following example demonstrates how you can mandate certain headers to be signed in the request's HMAC signature.
Create a Consumer `john`:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"username": "john"
}'
```
Create `hmac-auth` Credential for the Consumer:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers/john/credentials" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "cred-john-hmac-auth",
"plugins": {
"hmac-auth": {
"key_id": "john-key",
"secret_key": "john-secret-key"
}
}
}'
```
Create a Route with the `hmac-auth` Plugin which requires three headers to be present in the HMAC signature:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "hmac-auth-route",
"uri": "/get",
"methods": ["GET"],
"plugins": {
"hmac-auth": {
"signed_headers": ["date","x-custom-header-a","x-custom-header-b"]
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Generate a signature. You can use the Python snippet below or another stack of your choice:
```python title="hmac-sig-req-header-gen.py"
import hmac
import hashlib
import base64
from datetime import datetime, timezone
key_id = "john-key" # key id
secret_key = b"john-secret-key" # secret key
request_method = "GET" # HTTP method
request_path = "/get" # Route URI
algorithm= "hmac-sha256" # can use other algorithms in allowed_algorithms
custom_header_a = "hello123" # required custom header
custom_header_b = "world456" # required custom header
# get current datetime in GMT
# note: the signature will become invalid after the clock skew (default 300s)
# you can regenerate the signature after it becomes invalid, or increase the clock
# skew to prolong the validity within the advised security boundary
gmt_time = datetime.now(timezone.utc).strftime('%a, %d %b %Y %H:%M:%S GMT')
# construct the signing string (ordered)
# the date and any subsequent custom headers should be lowercased and separated by a
# single space character, i.e. `<key>:<space><value>`
# https://datatracker.ietf.org/doc/html/draft-cavage-http-signatures-12#section-2.1.6
signing_string = (
f"{key_id}\n"
f"{request_method} {request_path}\n"
f"date: {gmt_time}\n"
f"x-custom-header-a: {custom_header_a}\n"
f"x-custom-header-b: {custom_header_b}\n"
)
# create signature
signature = hmac.new(secret_key, signing_string.encode('utf-8'), hashlib.sha256).digest()
signature_base64 = base64.b64encode(signature).decode('utf-8')
# construct the request headers
headers = {
"Date": gmt_time,
"Authorization": (
f'Signature keyId="{key_id}",algorithm="{algorithm}",'
f'headers="@request-target date x-custom-header-a x-custom-header-b",'
f'signature="{signature_base64}"'
),
"x-custom-header-a": custom_header_a,
"x-custom-header-b": custom_header_b
}
# print headers
print(headers)
```
Run the script:
```shell
python3 hmac-sig-req-header-gen.py
```
You should see the request headers printed:
```text
{'Date': 'Fri, 06 Sep 2024 09:58:49 GMT', 'Authorization': 'Signature keyId="john-key",algorithm="hmac-sha256",headers="@request-target date x-custom-header-a x-custom-header-b",signature="MwJR8JOhhRLIyaHlJ3Snbrf5hv0XwdeeRiijvX3A3yE="', 'x-custom-header-a': 'hello123', 'x-custom-header-b': 'world456'}
```
Using the headers generated, send a request to the route:
```shell
curl -X GET "http://127.0.0.1:9080/get" \
-H "Date: Fri, 06 Sep 2024 09:58:49 GMT" \
-H 'Authorization: Signature keyId="john-key",algorithm="hmac-sha256",headers="@request-target date x-custom-header-a x-custom-header-b",signature="MwJR8JOhhRLIyaHlJ3Snbrf5hv0XwdeeRiijvX3A3yE="' \
-H "x-custom-header-a: hello123" \
-H "x-custom-header-b: world456"
```
You should see an `HTTP/1.1 200 OK` response similar to the following:
```json
{
"args": {},
"headers": {
"Accept": "*/*",
"Authorization": "Signature keyId=\"john-key\",algorithm=\"hmac-sha256\",headers=\"@request-target date x-custom-header-a x-custom-header-b\",signature=\"MwJR8JOhhRLIyaHlJ3Snbrf5hv0XwdeeRiijvX3A3yE=\"",
"Date": "Fri, 06 Sep 2024 09:58:49 GMT",
"Host": "127.0.0.1",
"User-Agent": "curl/8.6.0",
"X-Amzn-Trace-Id": "Root=1-66d98196-64a58db25ece71c077999ecd",
"X-Consumer-Username": "john",
"X-Credential-Identifier": "cred-john-hmac-auth",
"X-Custom-Header-A": "hello123",
"X-Custom-Header-B": "world456",
"X-Forwarded-Host": "127.0.0.1"
},
"origin": "192.168.65.1, 103.97.2.206",
"url": "http://127.0.0.1/get"
}
```
### Rate Limit with Anonymous Consumer
The following example demonstrates how you can configure different rate limiting policies for regular and anonymous Consumers, where the anonymous Consumer does not need to authenticate and has a smaller quota.
Create a regular Consumer `john` and configure the `limit-count` Plugin to allow for a quota of 3 within a 30-second window:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"username": "john",
"plugins": {
"limit-count": {
"count": 3,
"time_window": 30,
"rejected_code": 429
}
}
}'
```
Create the `hmac-auth` Credential for the Consumer `john`:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers/john/credentials" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "cred-john-hmac-auth",
"plugins": {
"hmac-auth": {
"key_id": "john-key",
"secret_key": "john-secret-key"
}
}
}'
```
Create an anonymous user `anonymous` and configure the `limit-count` Plugin to allow for a quota of 1 within a 30-second window:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"username": "anonymous",
"plugins": {
"limit-count": {
"count": 1,
"time_window": 30,
"rejected_code": 429
}
}
}'
```
Create a Route and configure the `hmac-auth` Plugin to allow the anonymous Consumer `anonymous` to bypass authentication:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "hmac-auth-route",
"uri": "/get",
"methods": ["GET"],
"plugins": {
"hmac-auth": {
"anonymous_consumer": "anonymous"
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Generate a signature. You can use the Python snippet below or another stack of your choice:
```python title="hmac-sig-header-gen.py"
import hmac
import hashlib
import base64
from datetime import datetime, timezone
key_id = "john-key" # key id
secret_key = b"john-secret-key" # secret key
request_method = "GET" # HTTP method
request_path = "/get" # Route URI
algorithm= "hmac-sha256" # can use other algorithms in allowed_algorithms
# get current datetime in GMT
# note: the signature will become invalid after the clock skew (default 300s)
# you can regenerate the signature after it becomes invalid, or increase the clock
# skew to prolong the validity within the advised security boundary
gmt_time = datetime.now(timezone.utc).strftime('%a, %d %b %Y %H:%M:%S GMT')
# construct the signing string (ordered)
# the date and any subsequent custom headers should be lowercased and separated by a
# single space character, i.e. `<key>:<space><value>`
# https://datatracker.ietf.org/doc/html/draft-cavage-http-signatures-12#section-2.1.6
signing_string = (
f"{key_id}\n"
f"{request_method} {request_path}\n"
f"date: {gmt_time}\n"
)
# create signature
signature = hmac.new(secret_key, signing_string.encode('utf-8'), hashlib.sha256).digest()
signature_base64 = base64.b64encode(signature).decode('utf-8')
# construct the request headers
headers = {
"Date": gmt_time,
"Authorization": (
f'Signature keyId="{key_id}",algorithm="{algorithm}",'
f'headers="@request-target date",'
f'signature="{signature_base64}"'
)
}
# print headers
print(headers)
```
Run the script:
```shell
python3 hmac-sig-header-gen.py
```
You should see the request headers printed:
```text
{'Date': 'Mon, 21 Oct 2024 17:31:18 GMT', 'Authorization': 'Signature keyId="john-key",algorithm="hmac-sha256",headers="@request-target date",signature="ztFfl9w7LmCrIuPjRC/DWSF4gN6Bt8dBBz4y+u1pzt8="'}
```
To verify, send five consecutive requests with the generated headers:
```shell
resp=$(seq 5 | xargs -I{} curl "http://127.0.0.1:9080/get" -H "Date: Mon, 21 Oct 2024 17:31:18 GMT" -H 'Authorization: Signature keyId="john-key",algorithm="hmac-sha256",headers="@request-target date",signature="ztFfl9w7LmCrIuPjRC/DWSF4gN6Bt8dBBz4y+u1pzt8="' -o /dev/null -s -w "%{http_code}\n") && \
count_200=$(echo "$resp" | grep "200" | wc -l) && \
count_429=$(echo "$resp" | grep "429" | wc -l) && \
echo "200": $count_200, "429": $count_429
```
You should see the following response, showing that out of the 5 requests, 3 requests were successful (status code 200) while the others were rejected (status code 429).
```text
200: 3, 429: 2
```
Send five anonymous requests:
```shell
resp=$(seq 5 | xargs -I{} curl "http://127.0.0.1:9080/get" -o /dev/null -s -w "%{http_code}\n") && \
count_200=$(echo "$resp" | grep "200" | wc -l) && \
count_429=$(echo "$resp" | grep "429" | wc -l) && \
echo "200": $count_200, "429": $count_429
```
You should see the following response, showing that only one request was successful:
```text
200: 1, 429: 4
```

View File

@@ -0,0 +1,128 @@
---
title: http-dubbo
keywords:
- Apache APISIX
- API Gateway
- Plugin
- http-dubbo
- http to dubbo
- transcode
description: This document contains information about the Apache APISIX http-dubbo Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `http-dubbo` plugin can transcode between HTTP and Dubbo (Note: in
Dubbo 2.x, the serialization type of the upstream service must be fastjson).
## Attributes
| Name | Type | Required | Default | Valid values | Description |
|--------------------------|---------|----------|---------|--------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| service_name | string | True | | | Dubbo service name |
| service_version | string | False | 0.0.0 | | Dubbo service version |
| method | string | True | | | Dubbo service method name |
| params_type_desc | string | True | | | Description of the Dubbo service method signature |
| serialization_header_key | string | False | | | If `serialization_header_key` is set, the plugin will read this request header to determine if the body has already been serialized according to the Dubbo protocol. If the value of this request header is true, the plugin will not modify the body content and will directly consider it as Dubbo request parameters. If it is false, the developer is required to pass parameters in the format of Dubbo's generic invocation, and the plugin will handle serialization. Note: Due to differences in precision between Lua and Java, serialization by the plugin may lead to parameter precision discrepancies. |
| serialized | boolean | False | false | [true, false] | Same as `serialization_header_key`. Priority is lower than `serialization_header_key`. |
| connect_timeout | number | False | 6000 | | Upstream tcp connect timeout |
| read_timeout | number | False | 6000 | | Upstream tcp read_timeout |
| send_timeout | number | False | 6000 | | Upstream tcp send_timeout |
## Enable Plugin
The example below enables the `http-dubbo` Plugin on the specified Route:
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl -i http://127.0.0.1:9180/apisix/admin/routes/1 \
-H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/TestService/testMethod",
"plugins": {
"http-dubbo": {
"method": "testMethod",
"params_type_desc": "Ljava/lang/Long;Ljava/lang/Integer;",
"serialized": true,
"service_name": "com.xxx.xxx.TestService",
"service_version": "0.0.0"
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:20880": 1
}
}
}'
```
## Example usage
Once you have configured the Plugin as shown above, you can make a request as shown below:
```shell
curl --location 'http://127.0.0.1:9080/TestService/testMethod' \
--data '1
2'
```
## How to Get `params_type_desc`
```java
Method[] declaredMethods = YourService.class.getDeclaredMethods();
String params_type_desc = ReflectUtils.getDesc(Arrays.stream(declaredMethods).filter(it -> it.getName().equals("yourmethod")).findAny().get().getParameterTypes());
// If there are method overloads, you need to find the method you want to expose.
// ReflectUtils is a Dubbo implementation.
```
## How to Serialize JSON According to Dubbo Protocol
To prevent loss of precision, we recommend using pre-serialized bodies for requests. The serialization rules for Dubbo's
fastjson are as follows:
- Convert each parameter to a JSON string using toJSONString.
- Separate each parameter with a newline character `\n`.
Some languages and libraries may produce unchanged results when calling toJSONString on strings or numbers. In such
cases, you may need to manually handle some special cases. For example:
- The string `abc"` needs to be encoded as `"abc\""`.
- The string `123` needs to be encoded as `"123"`.
If the method signature declares an abstract class, parent class, or generic type but the actual argument must be of a specific
type, the serialized parameter needs to include the concrete type information.
Refer to [WriteClassName](https://github.com/alibaba/fastjson/wiki/SerializerFeature_cn) for more details.
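As an illustration, the request body for the `testMethod(Long, Integer)` example above (with `params_type_desc` of `Ljava/lang/Long;Ljava/lang/Integer;`) could be built as in the minimal Python sketch below; the argument values are made up:
```python
import json

# hypothetical arguments for a method with params_type_desc "Ljava/lang/Long;Ljava/lang/Integer;"
args = [1, 2]

# each argument is JSON-encoded individually (the equivalent of fastjson's toJSONString),
# then the encoded arguments are joined with a newline character
body = "\n".join(json.dumps(arg) for arg in args)

print(body)                # prints 1 and 2 on separate lines, matching the example request above
print(json.dumps('abc"'))  # strings are quoted and escaped, e.g. "abc\"" as described above
```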
## Delete Plugin
To remove the `http-dubbo` Plugin, you can delete the corresponding JSON configuration from the Plugin configuration.
APISIX will automatically reload and you do not have to restart for this to take effect.

View File

@@ -0,0 +1,194 @@
---
title: http-logger
keywords:
- Apache APISIX
- API Gateway
- Plugin
- HTTP Logger
description: This document contains information about the Apache APISIX http-logger Plugin. Using this Plugin, you can push APISIX log data to HTTP or HTTPS servers.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `http-logger` Plugin is used to push request log data to HTTP/HTTPS servers.
This allows log data to be sent as JSON objects to monitoring tools and other HTTP servers.
## Attributes
| Name | Type | Required | Default | Valid values | Description |
| ---------------------- | ------- | -------- | ------------- | -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| uri | string | True | | | URI of the HTTP/HTTPS server. |
| auth_header | string | False | | | Authorization headers if required. |
| timeout | integer | False | 3 | [1,...] | Time to keep the connection alive for after sending a request. |
| log_format | object | False | | | Log format declared as key value pairs in JSON format. Values only support strings. [APISIX](../apisix-variable.md) or [Nginx](http://nginx.org/en/docs/varindex.html) variables can be used by prefixing the string with `$`. |
| include_req_body | boolean | False | false | [false, true] | When set to `true` includes the request body in the log. If the request body is too big to be kept in the memory, it can't be logged due to Nginx's limitations. |
| include_req_body_expr | array | False | | | Filter for when the `include_req_body` attribute is set to `true`. Request body is only logged when the expression set here evaluates to `true`. See [lua-resty-expr](https://github.com/api7/lua-resty-expr) for more. |
| include_resp_body | boolean | False | false | [false, true] | When set to `true` includes the response body in the log. |
| include_resp_body_expr | array | False | | | When the `include_resp_body` attribute is set to `true`, use this to filter based on [lua-resty-expr](https://github.com/api7/lua-resty-expr). If present, only logs the response if the expression evaluates to `true`. |
| concat_method          | string  | False    | "json"        | ["json", "new_line"] | Sets how to concatenate logs. When set to `json`, uses `json.encode` for all pending logs and when set to `new_line`, also uses `json.encode` but uses the newline (`\n`) to concatenate lines. See the illustrative sketch after this table. |
| ssl_verify | boolean | False | false | [false, true] | When set to `true` verifies the SSL certificate. |
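To illustrate how `concat_method` affects the request body, here is a conceptual Python sketch; the log entries are made up, and APISIX itself performs this step in Lua:
```python
import json

# two made-up log entries pending in the batch processor queue
entries = [
    {"route_id": "1", "client_ip": "127.0.0.1", "status": 200},
    {"route_id": "1", "client_ip": "127.0.0.1", "status": 503},
]

# concat_method = "json": the whole batch is encoded together
body_json = json.dumps(entries)

# concat_method = "new_line": each entry is JSON-encoded, then joined by newlines
body_new_line = "\n".join(json.dumps(entry) for entry in entries)

print(body_json)
print(body_new_line)
```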
:::note
This Plugin supports using batch processors to aggregate and process entries (logs/data) in a batch. This avoids the need for frequently submitting the data. The batch processor submits data every `5` seconds or when the data in the queue reaches `1000`. See [Batch Processor](../batch-processor.md#configuration) for more information or setting your custom configuration.
:::
### Example of default log format
```json
{
"service_id": "",
"apisix_latency": 100.99999809265,
"start_time": 1703907485819,
"latency": 101.99999809265,
"upstream_latency": 1,
"client_ip": "127.0.0.1",
"route_id": "1",
"server": {
"version": "3.7.0",
"hostname": "localhost"
},
"request": {
"headers": {
"host": "127.0.0.1:1984",
"content-type": "application/x-www-form-urlencoded",
"user-agent": "lua-resty-http/0.16.1 (Lua) ngx_lua/10025",
"content-length": "12"
},
"method": "POST",
"size": 194,
"url": "http://127.0.0.1:1984/hello?log_body=no",
"uri": "/hello?log_body=no",
"querystring": {
"log_body": "no"
}
},
"response": {
"headers": {
"content-type": "text/plain",
"connection": "close",
"content-length": "12",
"server": "APISIX/3.7.0"
},
"status": 200,
"size": 123
},
"upstream": "127.0.0.1:1982"
}
```
## Metadata
You can also set the format of the logs by configuring the Plugin metadata. The following configurations are available:
| Name | Type | Required | Default | Description |
| ---------- | ------ | -------- | ----------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| log_format | object | False | | Log format declared as key value pairs in JSON format. Values only support strings. [APISIX](../apisix-variable.md) or [Nginx](http://nginx.org/en/docs/varindex.html) variables can be used by prefixing the string with `$`. |
:::info IMPORTANT
Configuring the Plugin metadata is global in scope. This means that it will take effect on all Routes and Services which use the `http-logger` Plugin.
:::
The example below shows how you can configure through the Admin API:
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl http://127.0.0.1:9180/apisix/admin/plugin_metadata/http-logger \
-H "X-API-KEY: $admin_key" -X PUT -d '
{
"log_format": {
"host": "$host",
"@timestamp": "$time_iso8601",
"client_ip": "$remote_addr"
}
}'
```
With this configuration, your logs would be formatted as shown below:
```shell
{"host":"localhost","@timestamp":"2020-09-23T19:05:05-04:00","client_ip":"127.0.0.1","route_id":"1"}
{"host":"localhost","@timestamp":"2020-09-23T19:05:05-04:00","client_ip":"127.0.0.1","route_id":"1"}
```
## Enable Plugin
The example below shows how you can enable the Plugin on a specific Route:
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 \
-H "X-API-KEY: $admin_key" -X PUT -d '
{
"plugins": {
"http-logger": {
"uri": "http://mockbin.org/bin/:ID"
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
},
"uri": "/hello"
}'
```
As an example, the [mockbin](http://mockbin.org/bin/create) server is used to mock an HTTP server so that you can see the logs produced by APISIX.
## Example usage
Now, if you make a request to APISIX, it will be logged in your mockbin server:
```shell
curl -i http://127.0.0.1:9080/hello
```
## Delete Plugin
To disable this Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/hello",
"plugins": {},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
}
}'
```

View File

@@ -0,0 +1,188 @@
---
title: inspect
keywords:
- Apache APISIX
- API Gateway
- Plugin
- Inspect
- Dynamic Lua Debugging
description: This document contains information about the Apache APISIX inspect Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
It's useful to set an arbitrary breakpoint in any Lua file to inspect context information,
e.g. to print local variables if some condition is satisfied.
In this way, you don't need to modify the source code of your project, and you get diagnostic information
on demand, i.e. dynamic logging.
This plugin supports setting breakpoints in both interpreted and JIT-compiled functions.
The breakpoint could be at any position within the function. The function could be global, local, module-level, or anonymous.
## Features
* Set a breakpoint at any position
* Dynamic breakpoints
* Customized breakpoint handlers
* One-shot breakpoints
* Works for JIT-compiled functions
* If a function reference is specified, the performance impact is bound to that function only (JIT-compiled code does not trigger the debug hook, so it keeps running at full speed while the hook is enabled)
* Once all breakpoints are deleted, JIT compilation recovers
## Operation Graph
![Operation Graph](https://raw.githubusercontent.com/apache/apisix/master/docs/assets/images/plugin/inspect.png)
## API to define hook in hooks file
### require("apisix.inspect.dbg").set_hook(file, line, func, filter_func)
The breakpoint is specified by `file` (fully qualified or short file name) and the `line` number.
The `func` argument specifies the scope (a specific function, or global) of the JIT cache to flush:
* If the breakpoint is related to a module function or a global function, set it to that function reference; then only the JIT cache of that function is flushed, which avoids slowing down other parts of the program.
* If the breakpoint is related to a local function or an anonymous function, you have to set it to `nil` (because there is no way to get the function reference), which flushes the whole JIT cache of the Lua VM.
You can attach a `filter_func` function to the breakpoint; it takes `info` as its argument and returns true or false to determine whether the breakpoint should be removed. This makes it easy to set up one-shot breakpoints.
The `info` argument is a hash table that contains the following keys:
* `finfo`: `debug.getinfo(level, "nSlf")`
* `uv`: upvalues hash table
* `vals`: local variables hash table
## Attributes
| Name | Type | Required | Default | Description |
|--------------------|---------|----------|---------|------------------------------------------------------------------------------------------------|
| delay | integer | False | 3 | Time in seconds specifying how often to check the hooks file. |
| hooks_file | string | False | "/usr/local/apisix/plugin_inspect_hooks.lua" | Lua file to define hooks, which could be a link file. Ensure only administrator could write this file, otherwise it may be a security risk. |
## Enable Plugin
The Plugin is enabled by default:
```lua title="apisix/cli/config.lua"
local _M = {
plugins = {
"inspect",
...
},
plugin_attr = {
inspect = {
delay = 3,
hooks_file = "/usr/local/apisix/plugin_inspect_hooks.lua"
},
...
},
...
}
```
## Example usage
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```bash
# create test route
curl http://127.0.0.1:9180/apisix/admin/routes/test_limit_req -H "X-API-KEY: $admin_key" -X PUT -d '
{
"methods": ["GET"],
"uri": "/get",
"plugins": {
"limit-req": {
"rate": 100,
"burst": 0,
"rejected_code": 503,
"key_type": "var",
"key": "remote_addr"
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org": 1
}
}
}'
# create a hooks file to set a test breakpoint
# Note that the breakpoint is associated with the line number,
# so if the Lua code changes, you need to adjust the line number in the hooks file
cat <<EOF >/usr/local/apisix/example_hooks.lua
local dbg = require "apisix.inspect.dbg"
dbg.set_hook("limit-req.lua", 88, require("apisix.plugins.limit-req").access, function(info)
ngx.log(ngx.INFO, debug.traceback("foo traceback", 3))
ngx.log(ngx.INFO, dbg.getname(info.finfo))
ngx.log(ngx.INFO, "conf_key=", info.vals.conf_key)
return true
end)
--- more breakpoints could be defined via dbg.set_hook()
--- ...
EOF
# enable the hooks file
ln -sf /usr/local/apisix/example_hooks.lua /usr/local/apisix/plugin_inspect_hooks.lua
# check errors.log to confirm the test breakpoint is enabled
2022/09/01 00:55:38 [info] 2754534#2754534: *3700 [lua] init.lua:29: setup_hooks(): set hooks: err=nil, hooks=["limit-req.lua#88"], context: ngx.timer
# access the test route
curl -i http://127.0.0.1:9080/get
# check errors.log to confirm the test breakpoint is triggered
2022/09/01 00:55:52 [info] 2754534#2754534: *4070 [lua] resty_inspect_hooks.lua:4: foo traceback
stack traceback:
/opt/lua-resty-inspect/lib/resty/inspect/dbg.lua:50: in function </opt/lua-resty-inspect/lib/resty/inspect/dbg.lua:17>
/opt/apisix.fork/apisix/plugins/limit-req.lua:88: in function 'phase_func'
/opt/apisix.fork/apisix/plugin.lua:900: in function 'run_plugin'
/opt/apisix.fork/apisix/init.lua:456: in function 'http_access_phase'
access_by_lua(nginx.conf:303):2: in main chunk, client: 127.0.0.1, server: _, request: "GET /get HTTP/1.1", host: "127.0.0.1:9080"
2022/09/01 00:55:52 [info] 2754534#2754534: *4070 [lua] resty_inspect_hooks.lua:5: /opt/apisix.fork/apisix/plugins/limit-req.lua:88 (phase_func), client: 127.0.0.1, server: _, request: "GET /get HTTP/1.1", host: "127.0.0.1:9080"
2022/09/01 00:55:52 [info] 2754534#2754534: *4070 [lua] resty_inspect_hooks.lua:6: conf_key=remote_addr, client: 127.0.0.1, server: _, request: "GET /get HTTP/1.1", host: "127.0.0.1:9080"
```
## Delete Plugin
To remove the `inspect` Plugin, you can remove it from your configuration file (`conf/config.yaml`):
```yaml title="conf/config.yaml"
plugins:
# - inspect
```

View File

@@ -0,0 +1,154 @@
---
title: ip-restriction
keywords:
- Apache APISIX
- API Gateway
- Plugin
- IP restriction
- ip-restriction
description: The ip-restriction Plugin supports restricting access to upstream resources by IP addresses, through either configuring a whitelist or blacklist of IP addresses.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
<head>
<link rel="canonical" href="https://docs.api7.ai/hub/ip-restriction" />
</head>
## Description
The `ip-restriction` Plugin supports restricting access to upstream resources by IP addresses, through either configuring a whitelist or blacklist of IP addresses. Restricting IP to resources helps prevent unauthorized access and harden API security.
## Attributes
| Name | Type | Required | Default | Valid values | Description |
|---------------|---------------|----------|----------------------------------|--------------|------------------------------------------------------------------------|
| whitelist | array[string] | False | | | List of IPs or CIDR ranges to whitelist. |
| blacklist | array[string] | False | | | List of IPs or CIDR ranges to blacklist. |
| message | string | False | "Your IP address is not allowed" | [1, 1024] | Message returned when the IP address is not allowed access. |
| response_code | integer | False | 403 | [403, 404] | HTTP response code returned when the IP address is not allowed access. |
:::note
Either `whitelist` or `blacklist` must be configured, but they cannot be configured at the same time.
:::
## Examples
The examples below demonstrate how you can configure the `ip-restriction` Plugin for different scenarios.
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
### Restrict Access by Whitelisting
The following example demonstrates how you can whitelist a list of IP addresses that should have access to the upstream resource and customize the error message for access denial.
Create a Route with the `ip-restriction` Plugin to whitelist a range of IPs and customize the error message when the access is denied:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "ip-restriction-route",
"uri": "/anything",
"plugins": {
"ip-restriction": {
"whitelist": [
"192.168.0.1/24"
],
"message": "Access denied"
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Send a request to the Route:
```shell
curl -i "http://127.0.0.1:9080/anything"
```
If your IP is allowed, you should receive an `HTTP/1.1 200 OK` response. If not, you should receive an `HTTP/1.1 403 Forbidden` response with the following error message:
```text
{"message":"Access denied"}
```
### Restrict Access Using Modified IP
The following example demonstrates how you can modify the IP used for IP restriction, using the `real-ip` Plugin. This is particularly useful if APISIX is behind a reverse proxy and the real client IP is not available to APISIX.
Create a Route with the `ip-restriction` Plugin to whitelist a specific IP address and obtain client IP address from the URL parameter `realip`:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "ip-restriction-route",
"uri": "/anything",
"plugins": {
"ip-restriction": {
"whitelist": [
"192.168.1.241"
]
},
"real-ip": {
"source": "arg_realip"
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Send a request to the Route:
```shell
curl -i "http://127.0.0.1:9080/anything?realip=192.168.1.241"
```
You should receive an `HTTP/1.1 200 OK` response.
Send another request with a different IP address:
```shell
curl -i "http://127.0.0.1:9080/anything?realip=192.168.10.24"
```
You should receive an `HTTP/1.1 403 Forbidden` response.

View File

@@ -0,0 +1,198 @@
---
title: jwe-decrypt
keywords:
- Apache APISIX
- API Gateway
- Plugin
- JWE Decrypt
- jwe-decrypt
description: This document contains information about the Apache APISIX jwe-decrypt Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `jwe-decrypt` Plugin is used to decrypt [JWE](https://datatracker.ietf.org/doc/html/rfc7516) authorization headers in requests to an APISIX [Service](../terminology/service.md) or [Route](../terminology/route.md).
This Plugin adds an endpoint `/apisix/plugin/jwe/encrypt` for JWE encryption. For decryption, the key should be configured in [Consumer](../terminology/consumer.md).
## Attributes
For Consumer:
| Name | Type | Required | Default | Valid values | Description |
|---------------|---------|-------------------------------------------------------|---------|-----------------------------|----------------------------------------------------------------------------------------------------------------------------------------------|
| key | string | True | | | Unique key for a Consumer. |
| secret | string | True | | | The decryption key. Must be 32 characters. The key could be saved in a secret manager using the [Secret](../terminology/secret.md) resource. |
| is_base64_encoded | boolean | False | false | | Set to true if the secret is base64 encoded. |
:::note
After enabling `is_base64_encoded`, your `secret` length may exceed 32 chars. You only need to make sure that the length after decoding is still 32 chars.
:::
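For example, a quick way to produce and check a base64-encoded 32-byte secret (an illustrative Python sketch; any tool works):
```python
import base64

# a 32-character (32-byte) secret, as required by jwe-decrypt
raw_secret = b"-secret-length-must-be-32-chars-"

encoded = base64.b64encode(raw_secret).decode()
print(len(encoded), encoded)                  # 44 characters after encoding
assert len(base64.b64decode(encoded)) == 32   # but still 32 bytes after decoding
```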
For Route:
| Name | Type | Required | Default | Description |
|--------|--------|----------|---------------|---------------------------------------------------------------------|
| header | string | True | Authorization | The header to get the token from. |
| forward_header | string | True | Authorization | Set the header name that passes the plaintext to the Upstream. |
| strict | boolean | False | true | If true, throw a 403 error if JWE token is missing from the request. If false, do not throw an error if JWE token cannot be found. |
## Example usage
First, create a Consumer with `jwe-decrypt` and configure the decryption key:
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl http://127.0.0.1:9180/apisix/admin/consumers -H "X-API-KEY: $admin_key" -X PUT -d '
{
"username": "jack",
"plugins": {
"jwe-decrypt": {
"key": "user-key",
"secret": "-secret-length-must-be-32-chars-"
}
}
}'
```
Next, create a Route with `jwe-decrypt` enabled to decrypt the authorization header:
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"methods": ["GET"],
"uri": "/anything*",
"plugins": {
"jwe-decrypt": {}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
### Encrypt Data with JWE
The Plugin creates an internal endpoint `/apisix/plugin/jwe/encrypt` to encrypt data with JWE. To expose it publicly, create a Route with the [public-api](public-api.md) Plugin:
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/jwenew -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/apisix/plugin/jwe/encrypt",
"plugins": {
"public-api": {}
}
}'
```
Send a request to the endpoint, passing the key configured in the Consumer as a URI parameter, to encrypt some sample data in the payload:
```shell
curl -G --data-urlencode 'payload={"uid":10000,"uname":"test"}' 'http://127.0.0.1:9080/apisix/plugin/jwe/encrypt?key=user-key' -i
```
You should see a response similar to the following, with the JWE encrypted data in the response body:
```
HTTP/1.1 200 OK
Date: Mon, 25 Sep 2023 02:38:16 GMT
Content-Type: text/plain; charset=utf-8
Transfer-Encoding: chunked
Connection: keep-alive
Server: APISIX/3.5.0
Apisix-Plugins: public-api
eyJhbGciOiJkaXIiLCJraWQiOiJ1c2VyLWtleSIsImVuYyI6IkEyNTZHQ00ifQ..MTIzNDU2Nzg5MDEy.hfzMJ0YfmbMcJ0ojgv4PYAHxPjlgMivmv35MiA.7nilnBt2dxLR_O6kf-HQUA
```
### Decrypt Data with JWE
Send a request to the route with the JWE encrypted data in the `Authorization` header:
```shell
curl http://127.0.0.1:9080/anything/hello -H 'Authorization: eyJhbGciOiJkaXIiLCJraWQiOiJ1c2VyLWtleSIsImVuYyI6IkEyNTZHQ00ifQ..MTIzNDU2Nzg5MDEy.hfzMJ0YfmbMcJ0ojgv4PYAHxPjlgMivmv35MiA.7nilnBt2dxLR_O6kf-HQUA' -i
```
You should see a response similar to the following, where the `Authorization` header shows the plaintext of the payload:
```
HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 452
Connection: keep-alive
Date: Mon, 25 Sep 2023 02:38:59 GMT
Access-Control-Allow-Origin: *
Access-Control-Allow-Credentials: true
Server: APISIX/3.5.0
Apisix-Plugins: jwe-decrypt
{
"args": {},
"data": "",
"files": {},
"form": {},
"headers": {
"Accept": "*/*",
"Authorization": "{\"uid\":10000,\"uname\":\"test\"}",
"Host": "127.0.0.1",
"User-Agent": "curl/8.1.2",
"X-Amzn-Trace-Id": "Root=1-6510f2c3-1586ec011a22b5094dbe1896",
"X-Forwarded-Host": "127.0.0.1"
},
"json": null,
"method": "GET",
"origin": "127.0.0.1, 119.143.79.94",
"url": "http://127.0.0.1/anything/hello"
}
```
## Delete Plugin
To remove the `jwe-decrypt` Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"methods": ["GET"],
"uri": "/anything*",
"plugins": {},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```

View File

@@ -0,0 +1,911 @@
---
title: jwt-auth
keywords:
- Apache APISIX
- API Gateway
- Plugin
- JWT Auth
- jwt-auth
description: The jwt-auth Plugin supports the use of JSON Web Token (JWT) as a mechanism for clients to authenticate themselves before accessing Upstream resources.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
<head>
<link rel="canonical" href="https://docs.api7.ai/hub/jwt-auth" />
</head>
## Description
The `jwt-auth` Plugin supports the use of [JSON Web Token (JWT)](https://jwt.io/) as a mechanism for clients to authenticate themselves before accessing Upstream resources.
Once enabled, the Plugin exposes an endpoint to create JWT credentials by [Consumers](../terminology/consumer.md). The process generates a token that client requests should carry to identify themselves to APISIX. The token can be included in the request URL query string, request header, or cookie. APISIX will then verify the token to determine if a request should be allowed or denied to access Upstream resources.
When a Consumer is successfully authenticated, APISIX adds additional headers, such as `X-Consumer-Username`, `X-Credential-Identifier`, and other Consumer custom headers if configured, to the request, before proxying it to the Upstream service. The Upstream service will be able to differentiate between consumers and implement additional logic as needed. If any of these values is not available, the corresponding header will not be added.
## Attributes
For Consumer/Credential:
| Name | Type | Required | Default | Valid values | Description |
|---------------|---------|-------------------------------------------------------|---------|-----------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| key | string | True | | non-empty | Unique key for a Consumer. |
| secret | string | False | | non-empty | Shared key used to sign and verify the JWT when the algorithm is symmetric. Required when using `HS256` or `HS512` as the algorithm. If unspecified, the secret will be auto-generated. This field supports saving the value in Secret Manager using the [APISIX Secret](../terminology/secret.md) resource. |
| public_key | string | True if `RS256` or `ES256` is set for the `algorithm` attribute. | | | RSA or ECDSA public key. This field supports saving the value in Secret Manager using the [APISIX Secret](../terminology/secret.md) resource. |
| algorithm | string | False | HS256 | ["HS256", "HS512", "RS256", "ES256"] | Encryption algorithm. |
| exp | integer | False | 86400 | [1,...] | Expiry time of the token in seconds. |
| base64_secret | boolean | False | false | | Set to true if the secret is base64 encoded. |
| lifetime_grace_period | integer | False | 0 | [0,...] | Grace period in seconds. Used to account for clock skew between the server generating the JWT and the server validating the JWT. |
| key_claim_name | string | False | key | | The claim in the JWT payload that identifies the associated secret, such as `iss`. |
NOTE: `encrypt_fields = {"secret"}` is also defined in the schema, which means that the field will be stored encrypted in etcd. See [encrypted storage fields](../plugin-develop.md#encrypted-storage-fields).
For Routes or Services:
| Name | Type | Required | Default | Description |
|--------|--------|----------|---------------|---------------------------------------------------------------------|
| header | string | False | authorization | The header to get the token from. |
| query | string | False | jwt | The query string to get the token from. Lower priority than header. |
| cookie | string | False | jwt | The cookie to get the token from. Lower priority than query. |
| hide_credentials| boolean | False | false | If true, do not pass the header, query, or cookie with JWT to Upstream services. |
| key_claim_name | string | False | key | The name of the JWT claim that contains the user key (corresponds to Consumer's key attribute). |
| anonymous_consumer | string | False | false | Anonymous Consumer name. If configured, allow anonymous users to bypass the authentication. |
| store_in_ctx | boolean | False | false | Set to true will store the JWT payload in the request context (`ctx.jwt_auth_payload`). This allows lower-priority plugins that run afterwards on the same request to retrieve and use the JWT token. |
You can implement `jwt-auth` with [HashiCorp Vault](https://www.vaultproject.io/) to store and fetch secrets and RSA keys pairs from its [encrypted KV engine](https://developer.hashicorp.com/vault/docs/secrets/kv) using the [APISIX Secret](../terminology/secret.md) resource.
## Examples
The examples below demonstrate how you can work with the `jwt-auth` Plugin for different scenarios.
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
### Use JWT for Consumer Authentication
The following example demonstrates how to implement JWT for Consumer key authentication.
Create a Consumer `jack`:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"username": "jack"
}'
```
Create `jwt-auth` Credential for the Consumer:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers/jack/credentials" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "cred-jack-jwt-auth",
"plugins": {
"jwt-auth": {
"key": "jack-key",
"secret": "jack-hs256-secret"
}
}
}'
```
Create a Route with `jwt-auth` plugin:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "jwt-route",
"uri": "/headers",
"plugins": {
"jwt-auth": {}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
To issue a JWT for `jack`, you could use [JWT.io's debugger](https://jwt.io/#debugger-io) or other utilities. If you are using [JWT.io's debugger](https://jwt.io/#debugger-io), do the following:
* Select __HS256__ in the __Algorithm__ dropdown.
* Update the secret in the __Verify Signature__ section to be `jack-hs256-secret`.
* Update payload with Consumer key `jack-key`; and add `exp` or `nbf` in UNIX timestamp.
Your payload should look similar to the following:
```json
{
"key": "jack-key",
"nbf": 1729132271
}
```
Copy the generated JWT under the __Encoded__ section and save to a variable:
```text
jwt_token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJrZXkiOiJqYWNrLWtleSIsIm5iZiI6MTcyOTEzMjI3MX0.0VDKUzNkSaa_H5g_rGNbNtDcKJ9fBGgcGC56AsVsV-I
```
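Alternatively, you can generate an equivalent token programmatically. The following is a minimal sketch using the third-party PyJWT library (an assumption; any JWT library that supports HS256 works), with the same Consumer key and secret as above:
```python
import time
import jwt  # PyJWT (pip install pyjwt); jwt.encode returns a str in PyJWT 2.x

payload = {
    "key": "jack-key",        # must match the Consumer key configured in the Credential
    "nbf": int(time.time()),  # not-before claim; use "exp" instead to set an expiry
}

# sign with the shared secret configured in the jwt-auth Credential
token = jwt.encode(payload, "jack-hs256-secret", algorithm="HS256")
print(token)
```
Save the printed token to the `jwt_token` variable as above before sending requests.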
Send a request to the Route with the JWT in the `Authorization` header:
```shell
curl -i "http://127.0.0.1:9080/headers" -H "Authorization: ${jwt_token}"
```
You should receive an `HTTP/1.1 200 OK` response similar to the following:
```text
{
"headers": {
"Accept": "*/*",
"Authorization": "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJleHAiOjE3MjY2NDk2NDAsImtleSI6ImphY2sta2V5In0.kdhumNWrZFxjUvYzWLt4lFr546PNsr9TXuf0Az5opoM",
"Host": "127.0.0.1",
"User-Agent": "curl/8.6.0",
"X-Amzn-Trace-Id": "Root=1-66ea951a-4d740d724bd2a44f174d4daf",
"X-Consumer-Username": "jack",
"X-Credential-Identifier": "cred-jack-jwt-auth",
"X-Forwarded-Host": "127.0.0.1"
}
}
```
Once the token expires, send a request with the same token to verify:
```shell
curl -i "http://127.0.0.1:9080/headers" -H "Authorization: ${jwt_token}"
```
You should receive an `HTTP/1.1 401 Unauthorized` response similar to the following:
```text
{"message":"failed to verify jwt"}
```
### Carry JWT in Request Header, Query String, or Cookie
The following example demonstrates how to accept JWT in specified header, query string, and cookie.
Create a Consumer `jack`:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"username": "jack"
}'
```
Create `jwt-auth` Credential for the Consumer:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers/jack/credentials" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "cred-jack-jwt-auth",
"plugins": {
"jwt-auth": {
"key": "jack-key",
"secret": "jack-hs256-secret"
}
}
}'
```
Create a Route with `jwt-auth` Plugin, and specify that the request can either carry the token in the header, query, or the cookie:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "jwt-route",
"uri": "/get",
"plugins": {
"jwt-auth": {
"header": "jwt-auth-header",
"query": "jwt-query",
"cookie": "jwt-cookie"
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
To issue a JWT for `jack`, you could use [JWT.io's debugger](https://jwt.io/#debugger-io) or other utilities. If you are using [JWT.io's debugger](https://jwt.io/#debugger-io), do the following:
* Select __HS256__ in the __Algorithm__ dropdown.
* Update the secret in the __Verify Signature__ section to be `jack-hs256-secret`.
* Update payload with Consumer key `jack-key`; and add `exp` or `nbf` in UNIX timestamp.
Your payload should look similar to the following:
```json
{
"key": "jack-key",
"nbf": 1729132271
}
```
Copy the generated JWT under the __Encoded__ section and save to a variable:
```text
jwt_token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJrZXkiOiJqYWNrLWtleSIsIm5iZiI6MTcyOTEzMjI3MX0.0VDKUzNkSaa_H5g_rGNbNtDcKJ9fBGgcGC56AsVsV-I
```
#### Verify With JWT in Header
Sending request with JWT in the header:
```shell
curl -i "http://127.0.0.1:9080/get" -H "jwt-auth-header: ${jwt_token}"
```
You should receive an `HTTP/1.1 200 OK` response similar to the following:
```text
{
"args": {},
"headers": {
"Accept": "*/*",
"Host": "127.0.0.1",
"Jwt-Auth-Header": "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJrZXkiOiJ1c2VyLWtleSIsImV4cCI6MTY5NTEyOTA0NH0.EiktFX7di_tBbspbjmqDKoWAD9JG39Wo_CAQ1LZ9voQ",
...
},
...
}
```
#### Verify With JWT in Query String
Sending request with JWT in the query string:
```shell
curl -i "http://127.0.0.1:9080/get?jwt-query=${jwt_token}"
```
You should receive an `HTTP/1.1 200 OK` response similar to the following:
```text
{
"args": {
"jwt-query": "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJrZXkiOiJ1c2VyLWtleSIsImV4cCI6MTY5NTEyOTA0NH0.EiktFX7di_tBbspbjmqDKoWAD9JG39Wo_CAQ1LZ9voQ"
},
"headers": {
"Accept": "*/*",
...
},
"origin": "127.0.0.1, 183.17.233.107",
"url": "http://127.0.0.1/get?jwt-query=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJrZXkiOiJ1c2VyLWtleSIsImV4cCI6MTY5NTEyOTA0NH0.EiktFX7di_tBbspbjmqDKoWAD9JG39Wo_CAQ1LZ9voQ"
}
```
#### Verify With JWT in Cookie
Sending request with JWT in the cookie:
```shell
curl -i "http://127.0.0.1:9080/get" --cookie jwt-cookie=${jwt_token}
```
You should receive an `HTTP/1.1 200 OK` response similar to the following:
```text
{
"args": {},
"headers": {
"Accept": "*/*",
"Cookie": "jwt-cookie=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJrZXkiOiJ1c2VyLWtleSIsImV4cCI6MTY5NTEyOTA0NH0.EiktFX7di_tBbspbjmqDKoWAD9JG39Wo_CAQ1LZ9voQ",
...
},
...
}
```
### Manage Secrets in Environment Variables
The following example demonstrates how to save `jwt-auth` Consumer key to an environment variable and reference it in configuration.
APISIX supports referencing system and user environment variables configured through the [NGINX `env` directive](https://nginx.org/en/docs/ngx_core_module.html#env).
Save the key to an environment variable:
```shell
JACK_JWT_AUTH_KEY=jack-key
```
Create a Consumer `jack`:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"username": "jack"
}'
```
Create `jwt-auth` Credential for the Consumer and reference the environment variable in the key:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers/jack/credentials" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "cred-jack-jwt-auth",
"plugins": {
"jwt-auth": {
"key": "$env://JACK_JWT_AUTH_KEY",
"secret": "jack-hs256-secret"
}
}
}'
```
Create a Route with `jwt-auth` enabled:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "jwt-route",
"uri": "/get",
"plugins": {
"jwt-auth": {}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
To issue a JWT for `jack`, you could use [JWT.io's debugger](https://jwt.io/#debugger-io) or other utilities. If you are using [JWT.io's debugger](https://jwt.io/#debugger-io), do the following:
* Select __HS256__ in the __Algorithm__ dropdown.
* Update the secret in the __Verify Signature__ section to be `jack-hs256-secret`.
* Update payload with Consumer key `jack-key`; and add `exp` or `nbf` in UNIX timestamp.
Your payload should look similar to the following:
```json
{
"key": "jack-key",
"nbf": 1729132271
}
```
Copy the generated JWT under the __Encoded__ section and save to a variable:
```text
jwt_token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJrZXkiOiJqYWNrLWtleSIsIm5iZiI6MTcyOTEzMjI3MX0.0VDKUzNkSaa_H5g_rGNbNtDcKJ9fBGgcGC56AsVsV-I
```
Sending request with JWT in the header:
```shell
curl -i "http://127.0.0.1:9080/get" -H "Authorization: ${jwt_token}"
```
You should receive an `HTTP/1.1 200 OK` response similar to the following:
```text
{
"args": {},
"headers": {
"Accept": "*/*",
"Authorization": "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJleHAiOjE2OTUxMzMxNTUsImtleSI6Imp3dC1rZXkifQ.jiKuaAJqHNSSQCjXRomwnQXmdkC5Wp5VDPRsJlh1WAQ",
...
},
...
}
```
### Manage Secrets in Secret Manager
The following example demonstrates how to manage `jwt-auth` Consumer key in [HashiCorp Vault](https://www.vaultproject.io) and reference it in Plugin configuration.
Start a Vault development server in Docker:
```shell
docker run -d \
--name vault \
-p 8200:8200 \
--cap-add IPC_LOCK \
-e VAULT_DEV_ROOT_TOKEN_ID=root \
-e VAULT_DEV_LISTEN_ADDRESS=0.0.0.0:8200 \
vault:1.9.0 \
vault server -dev
```
APISIX currently supports [Vault KV engine version 1](https://developer.hashicorp.com/vault/docs/secrets/kv#kv-version-1). Enable it in Vault:
```shell
docker exec -i vault sh -c "VAULT_TOKEN='root' VAULT_ADDR='http://0.0.0.0:8200' vault secrets enable -path=kv -version=1 kv"
```
You should see a response similar to the following:
```text
Success! Enabled the kv secrets engine at: kv/
```
Create a secret and configure the Vault address and other connection information:
```shell
curl "http://127.0.0.1:9180/apisix/admin/secrets/vault/jwt" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"uri": "https://127.0.0.1:8200"
"prefix": "kv/apisix",
"token": "root"
}'
```
Create a Consumer `jack`:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"username": "jack"
}'
```
Create `jwt-auth` Credential for the Consumer and reference the secret in the key:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers/jack/credentials" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "cred-jack-jwt-auth",
"plugins": {
"jwt-auth": {
"key": "$secret://vault/jwt/jack/jwt-key",
"secret": "vault-hs256-secret"
}
}
}'
```
Create a Route with `jwt-auth` enabled:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "jwt-route",
"uri": "/get",
"plugins": {
"jwt-auth": {}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Set `jwt-auth` key value to be `jwt-vault-key` in Vault:
```shell
docker exec -i vault sh -c "VAULT_TOKEN='root' VAULT_ADDR='http://0.0.0.0:8200' vault kv put kv/apisix/jack jwt-key=jwt-vault-key"
```
You should see a response similar to the following:
```text
Success! Data written to: kv/apisix/jack
```
To issue a JWT, you could use [JWT.io's debugger](https://jwt.io/#debugger-io) or other utilities. If you are using [JWT.io's debugger](https://jwt.io/#debugger-io), do the following:
* Select __HS256__ in the __Algorithm__ dropdown.
* Update the secret in the __Verify Signature__ section to be `vault-hs256-secret`.
* Update payload with Consumer key `jwt-vault-key`; and add `exp` or `nbf` in UNIX timestamp.
Your payload should look similar to the following:
```json
{
"key": "jwt-vault-key",
"nbf": 1729132271
}
```
Copy the generated JWT under the __Encoded__ section and save to a variable:
```text
jwt_token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJrZXkiOiJqd3QtdmF1bHQta2V5IiwibmJmIjoxNzI5MTMyMjcxfQ.faiN93LNP1lGSXqAb4empNJKMRWop8-KgnU58VQn1EE
```
Send a request with the JWT in the `Authorization` header:
```shell
curl -i "http://127.0.0.1:9080/get" -H "Authorization: ${jwt_token}"
```
You should receive an `HTTP/1.1 200 OK` response similar to the following:
```text
{
"args": {},
"headers": {
"Accept": "*/*",
"Authorization": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJrZXkiOiJqd3QtdmF1bHQta2V5IiwiZXhwIjoxNjk1MTM4NjM1fQ.Au2liSZ8eQXUJR3SJESwNlIfqZdNyRyxIJK03L4dk_g",
...
},
...
}
```
### Sign JWT with RS256 Algorithm
The following example demonstrates how you can use asymmetric algorithms, such as RS256, to sign and validate JWT when implementing JWT for Consumer authentication. You will be generating RSA key pairs using [openssl](https://openssl-library.org/source/) and generating JWT using [JWT.io](https://jwt.io/#debugger-io) to better understand the composition of JWT.
Generate a 2048-bit RSA private key and extract the corresponding public key in PEM format:
```shell
openssl genrsa -out jwt-rsa256-private.pem 2048
openssl rsa -in jwt-rsa256-private.pem -pubout -out jwt-rsa256-public.pem
```
You should see `jwt-rsa256-private.pem` and `jwt-rsa256-public.pem` generated in your current working directory.
Visit [JWT.io's debugger](https://jwt.io/#debugger-io) and do the following:
* Select __RS256__ in the __Algorithm__ dropdown.
* Copy and paste the public key and private key contents into the corresponding fields of the __Verify Signature__ section.
* Update the payload with `key` matching the Consumer key you would like to use; and `exp` or `nbf` in UNIX timestamp.
The configuration should look similar to the following:
<br />
<div style={{textAlign: 'center'}}>
<img
src="https://static.apiseven.com/uploads/2024/12/12/SRe7AXMw_jwt_token.png"
alt="complete configuration of JWT generation on jwt.io"
width="70%"
/>
</div>
<br />
Copy the JWT on the left and save to an environment variable:
```shell
jwt_token=eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJrZXkiOiJqYWNrLWtleSIsImV4cCI6MTczNDIzMDQwMH0.XjqM0oszmCggwZs-8PUIlJv8wPJON1la2ET5v70E6TCE32Yq5ibrl-1azaK7IreAer3HtnVHeEfII2rR02v8xfR1TPIjU_oHov4qC-A4tLTbgqGVXI7fCy2WFm3PFh6MEKuRe6M3dCQtCAdkRRQrBr1gWFQZhV3TNeMmmtyIfuJpB7cp4DW5pYFsCcoE1Nw6Tz7dt8k0tPBTPI2Mv9AYfMJ30LHDscOaPNtz8YIk_TOkV9b9mhQudUJ7J_suCZMRxD3iL655jTp2gKsstGKdZa0_W9Reu4-HY3LSc5DS1XtfjuftpuUqgg9FvPU0mK_b0wT_Rq3lbYhcHb9GZ72qiQ
```
Create a Consumer `jack`:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"username": "jack"
}'
```
Create `jwt-auth` Credential for the Consumer and configure the RSA keys:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers/jack/credentials" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "cred-jack-jwt-auth",
"plugins": {
"jwt-auth": {
"key": "jack-key",
"algorithm": "RS256",
"public_key": "-----BEGIN PUBLIC KEY-----\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAnE0h4k/GWfEbYO/yE2MPjHtNKDLNz4mv1KNIPLxY2ccjPYOtjuug+iZ4MujLV59YfrHriTs0H8jweQfff3pRSMjyEK+4qWTY3TeKBXIEa3pVDeoedSJrgjLBVio6xH7et8ir+QScScfLaJHGB4/l3DDGyEhO782a9teY8brn5hsWX5uLmDJvxtTGAHYi847XOcx2UneW4tZ8wQ6JGBSiSg5qAHan4dFZ7CpixCNNqEcSK6EQ7lKOLeFGG8ys/dHBIEasU4oMlCuJH77+XQQ/shchy+vm9oZfP+grLZkV+nKAd8MQZsid7ZJ/fiB/BmnhGrjtIfh98jwxSx4DgdLhdwIDAQAB\n-----END PUBLIC KEY-----",
"private_key": "-----BEGIN PRIVATE KEY-----\nMIIEvwIBADANBgkqhkiG9w0BAQEFAASCBKkwggSlAgEAAoIBAQCcTSHiT8ZZ8Rtg7/ITYw+Me00oMs3Pia/Uo0g8vFjZxyM9g62O66D6Jngy6MtXn1h+seuJOzQfyPB5B99/elFIyPIQr7ipZNjdN4oFcgRrelUN6h51ImuCMsFWKjrEft63yKv5BJxJx8tokcYHj+XcMMbISE7vzZr215jxuufmGxZfm4uYMm/G1MYAdiLzjtc5zHZSd5bi1nzBDokYFKJKDmoAdqfh0VnsKmLEI02oRxIroRDuUo4t4UYbzKz90cEgRqxTigyUK4kfvv5dBD+yFyHL6+b2hl8/6CstmRX6coB3wxBmyJ3tkn9+IH8GaeEauO0h+H3yPDFLHgOB0uF3AgMBAAECggEARpY68Daw0Funzq5uN70r/3iLztSqx8hZpQEclXlF8wwQ6S33iqz1JSOMcwlZE7g9wfHd+jrHfndDypT4pVx7KxC86TZCghWuLrFvXqgwQM2dbcxGdwXVYZZEZAJsSeM19+/jYnFnl5ZoUVBMC4w79aX9j+O/6mKDUmjphHmxUuRCFjN0w7BRoYwmS796rSf1eoOcSXh2G9Ycc34DUFDfGpOzabndbmMfOz7W0DyUBG23fgLhNChTUGq8vMaqKXkQ8JKeKdEugSmRGz42HxjWoNlIGBDyB8tPNPT6SXsu/JBskdf9Gb71OWiub381oXC259sz+1K1REb1KSkgyC+bkQKBgQDKCnwXaf8aOIoJPCG53EqQfKScCIYQrvp1Uk3bs5tfYN4HcI3yAUnOqQ3Ux3eY9PfS37urlJXCfCbCnZ6P6xALZnN+aL2zWvZArlHvD6vnXiyevwK5IY+o2EW02h3A548wrGznQSsfX0tum22bEVlRuFfBbpZpizXwrV4ODSNhTwKBgQDGC27QQxah3yq6EbOhJJlJegjawVXEaEp/j4fD3qe/unLbUIFvCz6j9BAbgocDKzqXxlpTtIbnsesdLo7KM3MtYL0XO/87HIsBj9XCVgMkFCcM6YZ6fHnkJl0bs3haU4N9uI/wpokvfvXJp7iC9LUCseBdBj+N6T230HWiSbPjWQKBgQC8zzGKO/8vRNkSqkQmSczQ2/qE6p5G5w6eJy0lfOJdLswvDatJFpUf8PJA/6svoPYb9gOO5AtUNeuPAfeVLSnQTYzu+/kTrJTme0GMdAvE60gtjfmAgvGa64mw6gjWJk+1P92B+2/OIKMAmXXDbWIYMXqpBKzBs1vUMF/uJ68BlwKBgQDEivQem3YKj3/HyWmLstatpP7EmrqTgSzuC3OhX4b7L/5sySirG22/KKgTpSZ4bp5noeJiz/ZSWrAK9fmfkg/sKOV/+XsDHwCVPDnX86SKWbWnitp7FK2jTq94nlQC0H7edhvjqGLdUBJ9XoYu8MvzMLSJnXnVTHSDx832kU6FgQKBgQCbw4Eiu2IcOduIAokmsZl8Smh9ZeyhP2B/UBa1hsiPKQ6bw86QJr2OMbRXLBxtx+HYIfwDo4vXEE862PfoQyu6SjJBNmHiid7XcV06Z104UQNjP7IDLMMF+SASMqYoQWg/5chPfxBgIXnfWqw6TMmND3THY4Oj4Nhf4xeUg3HsaA==\n-----END PRIVATE KEY-----"
}
}
}'
```
:::tip
You should add a newline character (`\n`) after the opening line and before the closing line, for example `-----BEGIN PRIVATE KEY-----\n......\n-----END PRIVATE KEY-----`.
The rest of the key content can be concatenated into a single line.
:::
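The snippet below is one possible way to produce such a string (a sketch, not part of the official documentation; it assumes a POSIX-compatible `awk` and the key files generated earlier in this example):

```shell
# Collapse a PEM file into a single line, keeping a literal "\n" after the
# BEGIN line and before the END line, as described in the tip above.
awk 'NR==1 {printf "%s\\n", $0; next} /^-----END/ {printf "\\n%s\n", $0; next} {printf "%s", $0}' jwt-rsa256-private.pem
```

You can paste the resulting string directly into the `public_key` or `private_key` field of the Credential configuration.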
Create a Route with the `jwt-auth` Plugin:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "jwt-route",
"uri": "/headers",
"plugins": {
"jwt-auth": {}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
To verify, send a request to the Route with the JWT in the `Authorization` header:
```shell
curl -i "http://127.0.0.1:9080/headers" -H "Authorization: ${jwt_token}"
```
You should receive an `HTTP/1.1 200 OK` response similar to the following:
```json
{
"headers": {
"Accept": "*/*",
"Authorization": "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJrZXkiOiJqYWNrLWtleSIsImV4cCI6MTczNDIzMDQwMH0.XjqM0oszmCggwZs-8PUIlJv8wPJON1la2ET5v70E6TCE32Yq5ibrl-1azaK7IreAer3HtnVHeEfII2rR02v8xfR1TPIjU_oHov4qC-A4tLTbgqGVXI7fCy2WFm3PFh6MEKuRe6M3dCQtCAdkRRQrBr1gWFQZhV3TNeMmmtyIfuJpB7cp4DW5pYFsCcoE1Nw6Tz7dt8k0tPBTPI2Mv9AYfMJ30LHDscOaPNtz8YIk_TOkV9b9mhQudUJ7J_suCZMRxD3iL655jTp2gKsstGKdZa0_W9Reu4-HY3LSc5DS1XtfjuftpuUqgg9FvPU0mK_b0wT_Rq3lbYhcHb9GZ72qiQ",
...
}
}
```
### Add Consumer Custom ID to Header
The following example demonstrates how you can attach a Consumer custom ID to the authenticated request in the `X-Consumer-Custom-Id` header, which can be used to implement additional logic as needed.
Create a Consumer `jack` with a custom ID label:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"username": "jack",
"labels": {
"custom_id": "495aec6a"
}
}'
```
Create `jwt-auth` Credential for the Consumer:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers/jack/credentials" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "cred-jack-jwt-auth",
"plugins": {
"jwt-auth": {
"key": "jack-key",
"secret": "jack-hs256-secret"
}
}
}'
```
Create a Route with `jwt-auth`:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "jwt-auth-route",
"uri": "/anything",
"plugins": {
"jwt-auth": {}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
To issue a JWT for `jack`, you could use [JWT.io's debugger](https://jwt.io/#debugger-io) or other utilities. If you are using [JWT.io's debugger](https://jwt.io/#debugger-io), do the following:
* Select __HS256__ in the __Algorithm__ dropdown.
* Update the secret in the __Verify Signature__ section to be `jack-hs256-secret`.
* Update payload with Consumer key `jack-key`; and add `exp` or `nbf` in UNIX timestamp.
Your payload should look similar to the following:
```json
{
"key": "jack-key",
"nbf": 1729132271
}
```
Copy the generated JWT under the __Encoded__ section and save to a variable:
```text
jwt_token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJrZXkiOiJqYWNrLWtleSIsIm5iZiI6MTcyOTEzMjI3MX0.0VDKUzNkSaa_H5g_rGNbNtDcKJ9fBGgcGC56AsVsV-I
```
To verify, send a request to the Route with the JWT in the `Authorization` header:
```shell
curl -i "http://127.0.0.1:9080/headers" -H "Authorization: ${jwt_token}"
```
You should see an `HTTP/1.1 200 OK` response similar to the following, where `X-Consumer-Custom-Id` is attached:
```json
{
"headers": {
"Accept": "*/*",
"Authorization": "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJleHAiOjE3MjY2NDk2NDAsImtleSI6ImphY2sta2V5In0.kdhumNWrZFxjUvYzWLt4lFr546PNsr9TXuf0Az5opoM",
"Host": "127.0.0.1",
"User-Agent": "curl/8.6.0",
"X-Amzn-Trace-Id": "Root=1-66ea951a-4d740d724bd2a44f174d4daf",
"X-Consumer-Username": "jack",
"X-Credential-Identifier": "cred-jack-jwt-auth",
"X-Consumer-Custom-Id": "495aec6a",
"X-Forwarded-Host": "127.0.0.1"
}
}
```
### Rate Limit with Anonymous Consumer
The following example demonstrates how you can configure different rate limiting policies for regular and anonymous consumers, where the anonymous Consumer does not need to authenticate and has a smaller quota.
Create a regular Consumer `jack` and configure the `limit-count` Plugin to allow for a quota of 3 within a 30-second window:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"username": "jack",
"plugins": {
"limit-count": {
"count": 3,
"time_window": 30,
"rejected_code": 429
}
}
}'
```
Create the `jwt-auth` Credential for the Consumer `jack`:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers/jack/credentials" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "cred-jack-jwt-auth",
"plugins": {
"jwt-auth": {
"key": "jack-key",
"secret": "jack-hs256-secret"
}
}
}'
```
Create an anonymous user `anonymous` and configure the `limit-count` Plugin to allow for a quota of 1 within a 30-second window:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"username": "anonymous",
"plugins": {
"limit-count": {
"count": 1,
"time_window": 30,
"rejected_code": 429
}
}
}'
```
Create a Route and configure the `jwt-auth` Plugin to allow the anonymous Consumer `anonymous` to bypass authentication:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "jwt-auth-route",
"uri": "/anything",
"plugins": {
"jwt-auth": {
"anonymous_consumer": "anonymous"
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
To issue a JWT for `jack`, you could use [JWT.io's debugger](https://jwt.io/#debugger-io) or other utilities. If you are using [JWT.io's debugger](https://jwt.io/#debugger-io), do the following:
* Select __HS256__ in the __Algorithm__ dropdown.
* Update the secret in the __Verify Signature__ section to be `jack-hs256-secret`.
* Update payload with Consumer key `jack-key`; and add `exp` or `nbf` in UNIX timestamp.
Your payload should look similar to the following:
```json
{
"key": "jack-key",
"nbf": 1729132271
}
```
Copy the generated JWT under the __Encoded__ section and save to a variable:
```shell
jwt_token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJrZXkiOiJqYWNrLWtleSIsIm5iZiI6MTcyOTEzMjI3MX0.hjtSsEILpko14zb8-ibyxrB2tA5biYY9JrFm3do69vs
```
To verify the rate limiting, send five consecutive requests with `jack`'s JWT:
```shell
resp=$(seq 5 | xargs -I{} curl "http://127.0.0.1:9080/anything" -H "Authorization: ${jwt_token}" -o /dev/null -s -w "%{http_code}\n") && \
count_200=$(echo "$resp" | grep "200" | wc -l) && \
count_429=$(echo "$resp" | grep "429" | wc -l) && \
echo "200": $count_200, "429": $count_429
```
You should see the following response, showing that out of the 5 requests, 3 requests were successful (status code 200) while the others were rejected (status code 429).
```text
200: 3, 429: 2
```
Send five anonymous requests:
```shell
resp=$(seq 5 | xargs -I{} curl "http://127.0.0.1:9080/anything" -o /dev/null -s -w "%{http_code}\n") && \
count_200=$(echo "$resp" | grep "200" | wc -l) && \
count_429=$(echo "$resp" | grep "429" | wc -l) && \
echo "200": $count_200, "429": $count_429
```
You should see the following response, showing that only one request was successful:
```text
200: 1, 429: 4
```

@@ -0,0 +1,249 @@
---
title: kafka-logger
keywords:
- Apache APISIX
- API Gateway
- Plugin
- Kafka Logger
description: This document contains information about the Apache APISIX kafka-logger Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `kafka-logger` Plugin is used to push logs as JSON objects to Apache Kafka clusters. It works as a Kafka client driver for the ngx_lua Nginx module.
It might take some time to receive the log data. It will be automatically sent after the timer function in the [batch processor](../batch-processor.md) expires.
## Attributes
| Name | Type | Required | Default | Valid values | Description |
| ---------------------- | ------- | -------- | -------------- | --------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| broker_list            | object  | True     |                |                       | Deprecated, use `brokers` instead. List of Kafka brokers (nodes). |
| brokers                | array   | True     |                |                       | List of Kafka brokers (nodes). |
| brokers.host           | string  | True     |                |                       | The host of the Kafka broker, e.g., `192.168.1.1`. |
| brokers.port           | integer | True     |                | [0, 65535]            | The port of the Kafka broker. |
| brokers.sasl_config    | object  | False    |                |                       | The SASL config of the Kafka broker. |
| brokers.sasl_config.mechanism | string | False | "PLAIN"    | ["PLAIN"]             | The mechanism of the SASL config. |
| brokers.sasl_config.user | string | True    |                |                       | The user of the SASL config. Required if `sasl_config` exists. |
| brokers.sasl_config.password | string | True |               |                       | The password of the SASL config. Required if `sasl_config` exists. |
| kafka_topic            | string  | True     |                |                       | Target Kafka topic to push the logs to. |
| producer_type | string | False | async | ["async", "sync"] | Message sending mode of the producer. |
| required_acks | integer | False | 1 | [1, -1] | Number of acknowledgements the leader needs to receive for the producer to consider the request complete. This controls the durability of the sent records. The attribute follows the same configuration as the Kafka `acks` attribute. `required_acks` cannot be 0. See [Apache Kafka documentation](https://kafka.apache.org/documentation/#producerconfigs_acks) for more. |
| key | string | False | | | Key used for allocating partitions for messages. |
| timeout | integer | False | 3 | [1,...] | Timeout for the upstream to send data. |
| name | string | False | "kafka logger" | | Unique identifier for the batch processor. If you use Prometheus to monitor APISIX metrics, the name is exported in `apisix_batch_process_entries`. |
| meta_format            | enum    | False    | "default"      | ["default", "origin"] | Format to collect the request information. Setting to `default` collects the information in JSON format and `origin` collects the information with the original HTTP request. See [examples](#meta_format-example) below. |
| log_format | object | False | | | Log format declared as key value pairs in JSON format. Values only support strings. [APISIX](../apisix-variable.md) or [Nginx](http://nginx.org/en/docs/varindex.html) variables can be used by prefixing the string with `$`. |
| include_req_body | boolean | False | false | [false, true] | When set to `true` includes the request body in the log. If the request body is too big to be kept in the memory, it can't be logged due to Nginx's limitations. |
| include_req_body_expr | array | False | | | Filter for when the `include_req_body` attribute is set to `true`. Request body is only logged when the expression set here evaluates to `true`. See [lua-resty-expr](https://github.com/api7/lua-resty-expr) for more. |
| max_req_body_bytes | integer | False | 524288 | >=1 | Maximum request body allowed in bytes. Request bodies falling within this limit will be pushed to Kafka. If the size exceeds the configured value, the body will be truncated before being pushed to Kafka. |
| include_resp_body | boolean | False | false | [false, true] | When set to `true` includes the response body in the log. |
| include_resp_body_expr | array | False | | | Filter for when the `include_resp_body` attribute is set to `true`. Response body is only logged when the expression set here evaluates to `true`. See [lua-resty-expr](https://github.com/api7/lua-resty-expr) for more. |
| max_resp_body_bytes | integer | False | 524288 | >=1 | Maximum response body allowed in bytes. Response bodies falling within this limit will be pushed to Kafka. If the size exceeds the configured value, the body will be truncated before being pushed to Kafka. |
| cluster_name | integer | False | 1 | [0,...] | Name of the cluster. Used when there are two or more Kafka clusters. Only works if the `producer_type` attribute is set to `async`. |
| producer_batch_num     | integer | False    | 200            | [1,...]               | `batch_num` parameter in [lua-resty-kafka](https://github.com/doujiang24/lua-resty-kafka). Messages are merged and sent to the server in batches. Unit is message count. |
| producer_batch_size    | integer | False    | 1048576        | [0,...]               | `batch_size` parameter in [lua-resty-kafka](https://github.com/doujiang24/lua-resty-kafka) in bytes. |
| producer_max_buffering | integer | False    | 50000          | [1,...]               | `max_buffering` parameter in [lua-resty-kafka](https://github.com/doujiang24/lua-resty-kafka) representing maximum buffer size. Unit is message count. |
| producer_time_linger   | integer | False    | 1              | [1,...]               | `flush_time` parameter in [lua-resty-kafka](https://github.com/doujiang24/lua-resty-kafka) in seconds. |
| meta_refresh_interval  | integer | False    | 30             | [1,...]               | `refresh_interval` parameter in [lua-resty-kafka](https://github.com/doujiang24/lua-resty-kafka) specifying the time to auto refresh the metadata, in seconds. |
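As an illustration of the `include_req_body_expr` attribute (a hypothetical sketch; the `log_body` query argument and broker address are placeholders), the following plugin configuration logs the request body only when the request carries `log_body=yes`:

```json
{
  "kafka-logger": {
    "brokers": [{ "host": "127.0.0.1", "port": 9092 }],
    "kafka_topic": "test2",
    "include_req_body": true,
    "include_req_body_expr": [
      ["arg_log_body", "==", "yes"]
    ]
  }
}
```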
This Plugin supports using batch processors to aggregate and process entries (logs/data) in a batch. This avoids the need for frequently submitting the data. The batch processor submits data every `5` seconds or when the data in the queue reaches `1000`. See [Batch Processor](../batch-processor.md#configuration) for more information or setting your custom configuration.
:::info IMPORTANT
The data is first written to a buffer. When the buffer exceeds the `batch_max_size` or `buffer_duration` attribute, the data is sent to the Kafka server and the buffer is flushed.
If the process is successful, it will return `true`; if it fails, it returns `nil` along with a "buffer overflow" error string.
:::
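For example, a hypothetical configuration that tunes the batching behaviour might look like the following (`batch_max_size`, `inactive_timeout`, and `max_retry_count` are standard batch processor attributes; the broker address is a placeholder):

```json
{
  "kafka-logger": {
    "brokers": [{ "host": "127.0.0.1", "port": 9092 }],
    "kafka_topic": "test2",
    "batch_max_size": 500,
    "inactive_timeout": 10,
    "max_retry_count": 2
  }
}
```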
### meta_format example
- `default`:
```json
{
"upstream": "127.0.0.1:1980",
"start_time": 1619414294760,
"client_ip": "127.0.0.1",
"service_id": "",
"route_id": "1",
"request": {
"querystring": {
"ab": "cd"
},
"size": 90,
"uri": "/hello?ab=cd",
"url": "http://localhost:1984/hello?ab=cd",
"headers": {
"host": "localhost",
"content-length": "6",
"connection": "close"
},
"body": "abcdef",
"method": "GET"
},
"response": {
"headers": {
"connection": "close",
"content-type": "text/plain; charset=utf-8",
"date": "Mon, 26 Apr 2021 05:18:14 GMT",
"server": "APISIX/2.5",
"transfer-encoding": "chunked"
},
"size": 190,
"status": 200
},
"server": {
"hostname": "localhost",
"version": "2.5"
},
"latency": 0
}
```
- `origin`:
```http
GET /hello?ab=cd HTTP/1.1
host: localhost
content-length: 6
connection: close
abcdef
```
## Metadata
You can also set the format of the logs by configuring the Plugin metadata. The following configurations are available:
| Name | Type | Required | Default | Description |
| ---------- | ------ | -------- | ----------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| log_format | object | False | | Log format declared as key value pairs in JSON format. Values only support strings. [APISIX](../apisix-variable.md) or [Nginx](http://nginx.org/en/docs/varindex.html) variables can be used by prefixing the string with `$`. |
:::info IMPORTANT
Configuring the Plugin metadata is global in scope. This means that it will take effect on all Routes and Services which use the `kafka-logger` Plugin.
:::
The example below shows how you can configure through the Admin API:
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl http://127.0.0.1:9180/apisix/admin/plugin_metadata/kafka-logger -H "X-API-KEY: $admin_key" -X PUT -d '
{
"log_format": {
"host": "$host",
"@timestamp": "$time_iso8601",
"client_ip": "$remote_addr"
}
}'
```
With this configuration, your logs would be formatted as shown below:
```shell
{"host":"localhost","@timestamp":"2020-09-23T19:05:05-04:00","client_ip":"127.0.0.1","route_id":"1"}
{"host":"localhost","@timestamp":"2020-09-23T19:05:05-04:00","client_ip":"127.0.0.1","route_id":"1"}
```
## Enable Plugin
The example below shows how you can enable the `kafka-logger` Plugin on a specific Route:
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/5 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"plugins": {
"kafka-logger": {
"brokers" : [
{
"host" :"127.0.0.1",
"port" : 9092
}
],
"kafka_topic" : "test2",
"key" : "key1",
"batch_max_size": 1,
"name": "kafka logger"
}
},
"upstream": {
"nodes": {
"127.0.0.1:1980": 1
},
"type": "roundrobin"
},
"uri": "/hello"
}'
```
This Plugin also supports pushing to more than one broker at a time. You can specify multiple brokers in the Plugin configuration as shown below:
```json
"brokers" : [
{
"host" :"127.0.0.1",
"port" : 9092
},
{
"host" :"127.0.0.1",
"port" : 9093
}
],
```
## Example usage
Now, if you make a request to APISIX, it will be logged in your Kafka server:
```shell
curl -i http://127.0.0.1:9080/hello
```
## Delete Plugin
To remove the `kafka-logger` Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"methods": ["GET"],
"uri": "/hello",
"plugins": {},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
}
}'
```

@@ -0,0 +1,83 @@
---
title: kafka-proxy
keywords:
- Apache APISIX
- API Gateway
- Plugin
- Kafka proxy
description: This document contains information about the Apache APISIX kafka-proxy Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `kafka-proxy` plugin can be used to configure advanced parameters for the kafka upstream of Apache APISIX, such as SASL authentication.
## Attributes
| Name | Type | Required | Default | Valid values | Description |
|-------------------|---------|----------|---------|---------------|------------------------------------|
| sasl              | object  | optional |         | {"username": "user", "password" :"pwd"} | SASL/PLAIN authentication configuration. When this configuration exists, SASL authentication is enabled. The object must contain both the `username` and `password` parameters. |
| sasl.username | string | required | | | SASL/PLAIN authentication username |
| sasl.password | string | required | | | SASL/PLAIN authentication password |
NOTE: `encrypt_fields = {"sasl.password"}` is also defined in the schema, which means that the field will be stored encrypted in etcd. See [encrypted storage fields](../plugin-develop.md#encrypted-storage-fields).
:::note
If SASL authentication is enabled, the `sasl.username` and `sasl.password` must be set.
The current SASL authentication only supports PLAIN mode, which is the username password login method.
:::
## Example usage
When we use the `kafka` scheme for the upstream, we can add Kafka authentication configuration to it through this plugin.
```shell
curl -X PUT 'http://127.0.0.1:9180/apisix/admin/routes/r1' \
-H 'X-API-KEY: <api-key>' \
-H 'Content-Type: application/json' \
-d '{
"uri": "/kafka",
"plugins": {
"kafka-proxy": {
"sasl": {
"username": "user",
"password": "pwd"
}
}
},
"upstream": {
"nodes": {
"kafka-server1:9092": 1,
"kafka-server2:9092": 1,
"kafka-server3:9092": 1
},
"type": "none",
"scheme": "kafka"
}
}'
```
Now, we can test it by connecting to the `/kafka` endpoint via websocket.
## Delete Plugin
To remove the `kafka-proxy` Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.

@@ -0,0 +1,571 @@
---
title: key-auth
keywords:
- Apache APISIX
- API Gateway
- Plugin
- Key Auth
- key-auth
description: The key-auth Plugin supports the use of an authentication key as a mechanism for clients to authenticate themselves before accessing Upstream resources.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
<head>
<link rel="canonical" href="https://docs.api7.ai/hub/key-auth" />
</head>
## Description
The `key-auth` Plugin supports the use of an authentication key as a mechanism for clients to authenticate themselves before accessing Upstream resources.
To use the plugin, you would configure authentication keys on [Consumers](../terminology/consumer.md) and enable the Plugin on routes or services. The key can be included in the request URL query string or request header. APISIX will then verify the key to determine if a request should be allowed or denied to access Upstream resources.
When a Consumer is successfully authenticated, APISIX adds additional headers, such as `X-Consumer-Username`, `X-Credential-Identifier`, and other Consumer custom headers if configured, to the request, before proxying it to the Upstream service. The Upstream service will be able to differentiate between consumers and implement additional logic as needed. If any of these values is not available, the corresponding header will not be added.
## Attributes
For Consumer/Credential:
| Name | Type | Required | Description |
|------|--------|-------------|----------------------------|
| key | string | True | Unique key for a Consumer. This field supports saving the value in Secret Manager using the [APISIX Secret](../terminology/secret.md) resource. |
NOTE: `encrypt_fields = {"key"}` is also defined in the schema, which means that the field will be stored encrypted in etcd. See [encrypted storage fields](../plugin-develop.md#encrypted-storage-fields).
For Route:
| Name | Type | Required | Default | Description |
|--------|--------|-------------|-------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| header | string | False | apikey | The header to get the key from. |
| query | string | False | apikey | The query string to get the key from. Lower priority than header. |
| hide_credentials | boolean | False | false | If true, do not pass the header or query string with key to Upstream services. |
| anonymous_consumer | string | False | false | Anonymous Consumer name. If configured, allow anonymous users to bypass the authentication. |
## Examples
The examples below demonstrate how you can work with the `key-auth` Plugin for different scenarios.
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
### Implement Key Authentication on Route
The following example demonstrates how to implement key authentications on a Route and include the key in the request header.
Create a Consumer `jack`:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"username": "jack"
}'
```
Create `key-auth` Credential for the Consumer:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers/jack/credentials" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "cred-jack-key-auth",
"plugins": {
"key-auth": {
"key": "jack-key"
}
}
}'
```
Create a Route with `key-auth`:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "key-auth-route",
"uri": "/anything",
"plugins": {
"key-auth": {}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
#### Verify with a Valid Key
Send a request with the valid key:
```shell
curl -i "http://127.0.0.1:9080/anything" -H 'apikey: jack-key'
```
You should receive an `HTTP/1.1 200 OK` response.
#### Verify with an Invalid Key
Send a request with an invalid key:
```shell
curl -i "http://127.0.0.1:9080/anything" -H 'apikey: wrong-key'
```
You should see an `HTTP/1.1 401 Unauthorized` response with the following:
```text
{"message":"Invalid API key in request"}
```
#### Verify without a Key
Send a request without a key:
```shell
curl -i "http://127.0.0.1:9080/anything"
```
You should see an `HTTP/1.1 401 Unauthorized` response with the following:
```text
{"message":"Missing API key found in request"}
```
### Hide Authentication Information From Upstream
The following example demonstrates how to prevent the key from being sent to the Upstream services by configuring `hide_credentials`. By default, the authentication key is forwarded to the Upstream services, which might lead to security risks in some circumstances.
Create a Consumer `jack`:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"username": "jack"
}'
```
Create `key-auth` Credential for the Consumer:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers/jack/credentials" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "cred-jack-key-auth",
"plugins": {
"key-auth": {
"key": "jack-key"
}
}
}'
```
#### Without Hiding Credentials
Create a Route with `key-auth` and configure `hide_credentials` to `false`, which is the default configuration:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "key-auth-route",
"uri": "/anything",
"plugins": {
"key-auth": {
"hide_credentials": false
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Send a request with the valid key:
```shell
curl -i "http://127.0.0.1:9080/anything?apikey=jack-key"
```
You should see an `HTTP/1.1 200 OK` response with the following:
```json
{
"args": {
"auth": "jack-key"
},
"data": "",
"files": {},
"form": {},
"headers": {
"Accept": "*/*",
"Host": "127.0.0.1",
"User-Agent": "curl/8.2.1",
"X-Consumer-Username": "jack",
"X-Credential-Identifier": "cred-jack-key-auth",
"X-Amzn-Trace-Id": "Root=1-6502d8a5-2194962a67aa21dd33f94bb2",
"X-Forwarded-Host": "127.0.0.1"
},
"json": null,
"method": "GET",
"origin": "127.0.0.1, 103.248.35.179",
"url": "http://127.0.0.1/anything?apikey=jack-key"
}
```
Note that the Credential `jack-key` is visible to the Upstream service.
#### Hide Credentials
Update the plugin's `hide_credentials` to `true`:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes/key-auth-route" -X PATCH \
-H "X-API-KEY: ${admin_key}" \
-d '{
"plugins": {
"key-auth": {
"hide_credentials": true
}
}
}'
```
Send a request with the valid key:
```shell
curl -i "http://127.0.0.1:9080/anything?apikey=jack-key"
```
You should see an `HTTP/1.1 200 OK` response with the following:
```json
{
"args": {},
"data": "",
"files": {},
"form": {},
"headers": {
"Accept": "*/*",
"Host": "127.0.0.1",
"User-Agent": "curl/8.2.1",
"X-Consumer-Username": "jack",
"X-Credential-Identifier": "cred-jack-key-auth",
"X-Amzn-Trace-Id": "Root=1-6502d85c-16f34dbb5629a5960183e803",
"X-Forwarded-Host": "127.0.0.1"
},
"json": null,
"method": "GET",
"origin": "127.0.0.1, 103.248.35.179",
"url": "http://127.0.0.1/anything"
}
```
Note that the Credential `jack-key` is no longer visible to the Upstream service.
### Demonstrate Priority of Keys in Header and Query
The following example demonstrates how to implement key authentication by consumers on a Route and customize the URL parameter that should include the key. The example also shows that when the API key is configured in both the header and the query string, the request header has a higher priority.
Create a Consumer `jack`:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"username": "jack"
}'
```
Create `key-auth` Credential for the Consumer:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers/jack/credentials" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "cred-jack-key-auth",
"plugins": {
"key-auth": {
"key": "jack-key"
}
}
}'
```
Create a Route with `key-auth`:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "key-auth-route",
"uri": "/anything",
"plugins": {
"key-auth": {
"query": "auth"
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
#### Verify with a Valid Key
Send a request with the valid key:
```shell
curl -i "http://127.0.0.1:9080/anything?auth=jack-key"
```
You should receive an `HTTP/1.1 200 OK` response.
#### Verify with an Invalid Key
Send a request with an invalid key:
```shell
curl -i "http://127.0.0.1:9080/anything?auth=wrong-key"
```
You should see an `HTTP/1.1 401 Unauthorized` response with the following:
```text
{"message":"Invalid API key in request"}
```
#### Verify with a Valid Key in Header and an Invalid Key in Query String
However, if you include the valid key in the header while keeping the invalid key in the URL query string:
```shell
curl -i "http://127.0.0.1:9080/anything?auth=wrong-key" -H 'apikey: jack-key'
```
You should see an `HTTP/1.1 200 OK` response. This shows that the key included in the header always has a higher priority.
### Add Consumer Custom ID to Header
The following example demonstrates how you can attach a Consumer custom ID to the authenticated request in the `X-Consumer-Custom-Id` header, which can be used to implement additional logic as needed.
Create a Consumer `jack` with a custom ID label:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"username": "jack",
"labels": {
"custom_id": "495aec6a"
}
}'
```
Create `key-auth` Credential for the Consumer:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers/jack/credentials" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "cred-jack-key-auth",
"plugins": {
"key-auth": {
"key": "jack-key"
}
}
}'
```
Create a Route with `key-auth`:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "key-auth-route",
"uri": "/anything",
"plugins": {
"key-auth": {}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
To verify, send a request to the Route with the valid key:
```shell
curl -i "http://127.0.0.1:9080/anything?auth=jack-key"
```
You should see an `HTTP/1.1 200 OK` response similar to the following:
```json
{
"args": {
"auth": "jack-key"
},
"data": "",
"files": {},
"form": {},
"headers": {
"Accept": "*/*",
"Host": "127.0.0.1",
"User-Agent": "curl/8.6.0",
"X-Amzn-Trace-Id": "Root=1-66ea8d64-33df89052ae198a706e18c2a",
"X-Consumer-Username": "jack",
"X-Credential-Identifier": "cred-jack-key-auth",
"X-Consumer-Custom-Id": "495aec6a",
"X-Forwarded-Host": "127.0.0.1"
},
"json": null,
"method": "GET",
"origin": "192.168.65.1, 205.198.122.37",
"url": "http://127.0.0.1/anything?apikey=jack-key"
}
```
### Rate Limit with Anonymous Consumer
The following example demonstrates how you can configure different rate limiting policies for regular and anonymous consumers, where the anonymous Consumer does not need to authenticate and has a smaller quota.
Create a regular Consumer `jack` and configure the `limit-count` Plugin to allow for a quota of 3 within a 30-second window:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"username": "jack",
"plugins": {
"limit-count": {
"count": 3,
"time_window": 30,
"rejected_code": 429
}
}
}'
```
Create the `key-auth` Credential for the Consumer `jack`:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers/jack/credentials" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "cred-jack-key-auth",
"plugins": {
"key-auth": {
"key": "jack-key"
}
}
}'
```
Create an anonymous user `anonymous` and configure the `limit-count` Plugin to allow for a quota of 1 within a 30-second window:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"username": "anonymous",
"plugins": {
"limit-count": {
"count": 1,
"time_window": 30,
"rejected_code": 429
}
}
}'
```
Create a Route and configure the `key-auth` Plugin to allow the anonymous Consumer `anonymous` to bypass authentication:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "key-auth-route",
"uri": "/anything",
"plugins": {
"key-auth": {
"anonymous_consumer": "anonymous"
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
To verify, send five consecutive requests with `jack`'s key:
```shell
resp=$(seq 5 | xargs -I{} curl "http://127.0.0.1:9080/anything" -H 'apikey: jack-key' -o /dev/null -s -w "%{http_code}\n") && \
count_200=$(echo "$resp" | grep "200" | wc -l) && \
count_429=$(echo "$resp" | grep "429" | wc -l) && \
echo "200": $count_200, "429": $count_429
```
You should see the following response, showing that out of the 5 requests, 3 requests were successful (status code 200) while the others were rejected (status code 429).
```text
200: 3, 429: 2
```
Send five anonymous requests:
```shell
resp=$(seq 5 | xargs -I{} curl "http://127.0.0.1:9080/anything" -o /dev/null -s -w "%{http_code}\n") && \
count_200=$(echo "$resp" | grep "200" | wc -l) && \
count_429=$(echo "$resp" | grep "429" | wc -l) && \
echo "200": $count_200, "429": $count_429
```
You should see the following response, showing that only one request was successful:
```text
200: 1, 429: 4
```

@@ -0,0 +1,255 @@
---
title: lago
keywords:
- Apache APISIX
- API Gateway
- Plugin
- lago
- monetization
- github.com/getlago/lago
description: The lago plugin reports usage to a Lago instance, which allows users to integrate Lago with APISIX for API monetization.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `lago` plugin pushes requests and responses to [Lago Self-hosted](https://github.com/getlago/lago) and [Lago Cloud](https://getlago.com) via the Lago REST API. The plugin can be used together with a variety of APISIX built-in features, such as APISIX Consumers and the `request-id` plugin.
This enables API monetization, or lets APISIX act as an AI gateway for AI token billing scenarios.
:::note disclaimer
Lago owns its trademarks and controls its commercial products and open source projects.
The [https://github.com/getlago/lago](https://github.com/getlago/lago) project uses the `AGPL-3.0` license instead of the `Apache-2.0` license that is the same as Apache APISIX. As a user, you will need to evaluate for yourself whether it is applicable to your business to use the project in a compliant way or to obtain another type of license from Lago. Apache APISIX community does not endorse it.
The plugin does not contain any proprietary code or SDKs from Lago, it is contributed by contributors to Apache APISIX and licensed under the `Apache-2.0` license, which is in line with any other part of APISIX and you don't need to worry about its compliance.
:::
When enabled, the plugin will collect information from the request context (e.g. event code, transaction ID, associated subscription ID) as configured and serialize them into [Event JSON objects](https://getlago.com/docs/api-reference/events/event-object) as required by Lago. They will be added to the buffer and sent to Lago in batches of up to 100. This batch size is a [requirement](https://getlago.com/docs/api-reference/events/batch) from Lago. If you want to modify it, see [batch processor](../batch-processor.md) for more details.
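For reference, the serialized batch payload sent to Lago looks roughly like the following (an illustrative sketch; the field values are placeholders and the authoritative schema is the Lago event object documentation linked above):

```json
{
  "events": [
    {
      "transaction_id": "req_0195ab...",
      "external_subscription_id": "sub_12345",
      "code": "test",
      "timestamp": 1729132271,
      "properties": {
        "tier": "expensive"
      }
    }
  ]
}
```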
## Attributes
| Name | Type | Required | Default | Valid values | Description |
|---|---|---|---|---|---|
| endpoint_addrs | array[string] | True | | | Lago API address, such as `http://127.0.0.1:3000`. It supports both self-hosted Lago and Lago Cloud. If multiple endpoints are configured, the log will be pushed to a randomly selected endpoint from the list. |
| endpoint_uri | string | False | /api/v1/events/batch | | Lago API endpoint for [batch usage events](https://docs.getlago.com/api-reference/events/batch). |
| token | string | True | | | Lago API key created in the Lago dashboard. |
| event_transaction_id | string | True | | | Event's transaction ID, used to identify and de-duplicate the event. It supports string templates containing APISIX and NGINX variables, such as `req_${request_id}`, which allows you to use values returned by upstream services or the `request-id` plugin. |
| event_subscription_id | string | True | | | Event's subscription ID, which is automatically generated or configured when you assign the plan to the customer on Lago. This is used to associate API consumption to a customer subscription and supports string templates containing APISIX and NGINX variables, such as `cus_${consumer_name}`, which allows you to use values returned by upstream services or APISIX consumer. |
| event_code | string | True | | | Lago billable metric's code for associating an event to a specified billable item. |
| event_properties | object | False | | | Event's properties, used to attach information to an event. This allows you to send certain information on an event to Lago, such as the HTTP status to exclude failed requests from billing, or the AI token consumption in the response body for accurate billing. The keys are fixed strings, while the values can be string templates containing APISIX and NGINX variables, such as `${status}`. |
| ssl_verify | boolean | False | true | | If true, verify Lago's SSL certificates. |
| timeout | integer | False | 3000 | [1, 60000] | Timeout for the Lago service HTTP call in milliseconds. |
| keepalive | boolean | False | true | | If true, keep the connection alive for multiple requests. |
| keepalive_timeout | integer | False | 60000 | >=1000 | Keepalive timeout in milliseconds. |
| keepalive_pool | integer | False | 5 | >=1 | Maximum number of connections in the connection pool. |
This Plugin supports using batch processors to aggregate and process events in a batch. This avoids the need for frequently submitting the data. The batch processor submits data every `5` seconds or when the data in the queue reaches `1000`. See [Batch Processor](../batch-processor.md#configuration) for more information or setting your custom configuration.
## Examples
The examples below demonstrate how you can configure the `lago` Plugin for a typical scenario.
To follow along with the examples, start a Lago instance. Refer to [https://github.com/getlago/lago](https://github.com/getlago/lago) or use Lago Cloud.
Follow these brief steps to configure Lago:
1. Get the Lago API Key (also known as `token`), from the __Developer__ page of the Lago dashboard.
2. Next, create a billable metric used by APISIX, assuming its code is `test`. Set the `Aggregation type` to `Count`; and add a filter with a key of `tier` whose value contains `expensive`, to allow us to distinguish between expensive and regular API calls, which will be demonstrated later.
3. Create a plan and add the created metric to it. Its code can be configured however you like. In the __Usage-based charges__ section, add the billable metric created previously as a `Metered charge` item. Specify the default price as `$1`. Add a filter, use `tier: expensive` to perform the filtering, and specify its price as `$10`.
4. Select an existing consumer or create a new one to assign the plan you just created. You need to specify a `Subscription external ID` (or you can have Lago generate it), which will be used as the APISIX consumer username.
Next we need to configure APISIX for demonstrations.
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
### Report API call usage
The following example demonstrates how you can configure the `lago` Plugin on a Route to measuring API call usage.
Create a Route with the `lago`, `request-id`, `key-auth` Plugins as such:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "lago-route-1",
"uri": "/get",
"plugins": {
"request-id": {
"include_in_response": true
},
"key-auth": {},
"lago": {
"endpoint_addrs": ["http://12.0.0.1:3000"],
"token": "<Get token from Lago dashboard>",
"event_transaction_id": "${http_x_request_id}",
"event_subscription_id": "${http_x_consumer_username}",
"event_code": "test"
}
},
"upstream": {
"nodes": {
"httpbin.org:80": 1
},
"type": "roundrobin"
}
}'
```
Create a second route with the `lago`, `request-id`, `key-auth` Plugin as such:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "lago-route-2",
"uri": "/anything",
"plugins": {
"request-id": {
"include_in_response": true
},
"key-auth": {},
"lago": {
"endpoint_addrs": ["http://12.0.0.1:3000"],
"token": "<Get token from Lago dashboard>",
"event_transaction_id": "${http_x_request_id}",
"event_subscription_id": "${http_x_consumer_username}",
"event_code": "test",
"event_properties": {
"tier": "expensive"
}
}
},
"upstream": {
"nodes": {
"httpbin.org:80": 1
},
"type": "roundrobin"
}
}'
```
Create a Consumer:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"username": "<Lago subscription external ID>",
"plugins": {
"key-auth": {
"key": "demo"
}
}
}'
```
Send three requests to each of the two Routes:
```shell
curl "http://127.0.0.1:9080/get"
curl "http://127.0.0.1:9080/get"
curl "http://127.0.0.1:9080/get"
curl "http://127.0.0.1:9080/anything"
curl "http://127.0.0.1:9080/anything"
curl "http://127.0.0.1:9080/anything"
```
You should receive `HTTP/1.1 200 OK` responses for all requests.
Wait a few seconds, then navigate to the __Developer__ page in the Lago dashboard. Under __Events__, you should see 6 event entries sent by APISIX.
If the self-hosted instance's event worker is configured correctly (or if you're using Lago Cloud), you can also see the total amount consumed in real time in the consumer's subscription usage, which should be `3 * $1 + 3 * $10 = $33` according to our demo use case.
## FAQ
### Purpose of the Plugin
When you make an effort to monetize your API, it's hard to find a ready-made, low-cost solution, so you may have to build your own billing stack, which is complicated.
This plugin allows you to use APISIX to handle API proxies and use Lago as a billing stack through direct integration with Lago, and both the APISIX open source project and Lago will be part of your portfolio, which is a huge time saver.
Every API call results in a Lago event, which allows you to bill users for real usage, i.e. pay-as-you-go, and thanks to our built-in transaction ID (request ID) support, you can simply implement API call logging and troubleshooting for your customers.
In addition to typical API monetization scenarios, APISIX can also do AI tokens-based billing when it is acting as an AI gateway, where each Lago event generated by an API request includes exactly how many tokens were consumed, to allow you to charge the user for a fine-grained per-tokens usage.
### Is it flexible?
Of course. Since the transaction ID and subscription ID are configuration items that accept APISIX and NGINX variables, it is simple to integrate the plugin with any existing or custom authentication and internal services.
- Use custom authentication: as long as the Lago subscription ID represented by the user ID is registered as an APISIX variable, it will be available from there, so custom authentication is completely possible!
- Integration with internal services: You might not need the APISIX built-in request-id plugin. That's OK. You can have your internal service (APISIX upstream) generate it and include it in the HTTP response header. Then you can access it via an NGINX variable in the transaction ID.
Event properties are supported, allowing you to set special values for specific APIs. For example, if your service has 100 APIs, you can enable general billing for all of them while customizing a few with different pricing—just as demonstrated above.
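To illustrate the second point above, the snippet below is a hypothetical `lago` configuration: the `X-Trace-Id` response header and the corresponding `$sent_http_x_trace_id` NGINX variable are assumptions for an ID your internal service would return, and the address and event code are placeholders reused from the earlier examples:

```json
{
  "lago": {
    "endpoint_addrs": ["http://127.0.0.1:3000"],
    "token": "<Get token from Lago dashboard>",
    "event_transaction_id": "${sent_http_x_trace_id}",
    "event_subscription_id": "${http_x_consumer_username}",
    "event_code": "test"
  }
}
```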
### Which Lago versions does it work with?
When we first developed the Lago plugin, Lago had released version `1.17.0`, which we used for the integration, so it works with at least `1.17.0`.
Technically, we use the Lago batch event API to submit events in batches, and APISIX will only use this API, so as long as Lago doesn't make any disruptive changes to this API, APISIX will be able to integrate with it.
Here's an [archive page](https://web.archive.org/web/20250516073803/https://getlago.com/docs/api-reference/events/batch) of the API documentation, which allows you to check the differences between the API at the time of our integration and the latest API.
If the latest API changes, you can submit an issue to inform the APISIX maintainers that this may require some changes.
### Why can't Lago receive events?
Check `error.log` for an entry such as the following:
```text
2023/04/30 13:45:46 [error] 19381#19381: *1075673 [lua] batch-processor.lua:95: Batch Processor[lago logger] failed to process entries: lago api returned status: 400, body: <error message>, context: ngx.timer, client: 127.0.0.1, server: 0.0.0.0:9080
```
The error can be diagnosed based on the status code and the Lago server response body included in the `failed to process entries: lago api returned status: 400, body: <error message>` message.
### Reliability of reporting
The plugin may encounter a network problem that prevents the node where the gateway is located from communicating with the Lago API. In that case, APISIX will discard the batch according to the [batch processor](../batch-processor.md) configuration: the batch is dropped once the specified number of retries has been made and the data still cannot be sent.
Discarded events are permanently lost, so it is recommended that you use this plugin in conjunction with other logging mechanisms and replay any events that were discarded while Lago was unavailable, to ensure that all logs are correctly sent to Lago.
### Will the event duplicate?
While APISIX performs retries based on the [batch processor](../batch-processor.md) configuration, you don't need to worry about duplicate events being reported to Lago.
The `event_transaction_id` and `timestamp` are generated and logged after the request is processed on the APISIX side, and Lago de-duplicates events based on them.
So even if a retry is triggered because Lago's `success` response is lost on the network and never received by APISIX, the event is still not duplicated on Lago.
### Performance Impacts
The plugin is logically simple and reliable; it simply builds a Lago event object for each request, buffers and sends them in bulk. The logic is not coupled to the request proxy path, so this does not cause latency to rise for requests going through the gateway.
Technically, the logic is executed in the NGINX log phase and [batch processor](../batch-processor.md) timer, so this does not affect the request itself.
### Resource overhead
As explained earlier in the performance impact section, the plugin doesn't cause a significant increase in system resources. It only uses a small amount of memory to store events for batching.

@@ -0,0 +1,168 @@
---
title: ldap-auth
keywords:
- Apache APISIX
- API Gateway
- Plugin
- LDAP Authentication
- ldap-auth
description: This document contains information about the Apache APISIX ldap-auth Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `ldap-auth` Plugin can be used to add LDAP authentication to a Route or a Service.
This Plugin works with the Consumer object and the consumers of the API can authenticate with an LDAP server using [basic authentication](https://en.wikipedia.org/wiki/Basic_access_authentication).
This Plugin uses [lua-resty-ldap](https://github.com/api7/lua-resty-ldap) for connecting with an LDAP server.
## Attributes
For Consumer:
| Name | Type | Required | Description |
| ------- | ------ | -------- | -------------------------------------------------------------------------------- |
| user_dn | string | True     | User DN of the LDAP client. For example, `cn=user01,ou=users,dc=example,dc=org`. This field supports saving the value in Secret Manager using the [APISIX Secret](../terminology/secret.md) resource. |
For Route:
| Name | Type | Required | Default | Description |
|----------|---------|----------|---------|------------------------------------------------------------------------|
| base_dn  | string  | True     |         | Base DN of the LDAP server. For example, `ou=users,dc=example,dc=org`.  |
| ldap_uri | string  | True     |         | URI of the LDAP server.                                                 |
| use_tls  | boolean | False    | `false` | If set to `true`, uses TLS.                                             |
| tls_verify | boolean | False  | `false` | Whether to verify the server certificate when `use_tls` is enabled. If set to `true`, you must set `ssl_trusted_certificate` in `config.yaml` and make sure the host of `ldap_uri` matches the host in the server certificate. |
| uid | string | False | `cn` | uid attribute. |
## Enable plugin
First, you have to create a Consumer and enable the `ldap-auth` Plugin on it:
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl http://127.0.0.1:9180/apisix/admin/consumers -H "X-API-KEY: $admin_key" -X PUT -d '
{
"username": "foo",
"plugins": {
"ldap-auth": {
"user_dn": "cn=user01,ou=users,dc=example,dc=org"
}
}
}'
```
Now you can enable the Plugin on a specific Route or a Service as shown below:
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"methods": ["GET"],
"uri": "/hello",
"plugins": {
"ldap-auth": {
"base_dn": "ou=users,dc=example,dc=org",
"ldap_uri": "localhost:1389",
"uid": "cn"
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
}
}'
```
## Example usage
After configuring the Plugin as mentioned above, clients can make requests with authorization to access the API:
```shell
curl -i -uuser01:password1 http://127.0.0.1:9080/hello
```
```shell
HTTP/1.1 200 OK
...
hello, world
```
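Under the hood, the `-u` option sends the same credentials in a standard basic authentication header. For illustration, the following request is equivalent; the header value is the base64 encoding of `user01:password1`:
```shell
curl -i http://127.0.0.1:9080/hello \
  -H "Authorization: Basic dXNlcjAxOnBhc3N3b3JkMQ=="
```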
If an authorization header is missing or invalid, the request is denied:
```shell
curl -i http://127.0.0.1:9080/hello
```
```shell
HTTP/1.1 401 Unauthorized
...
{"message":"Missing authorization in request"}
```
```shell
curl -i -uuser:password1 http://127.0.0.1:9080/hello
```
```shell
HTTP/1.1 401 Unauthorized
...
{"message":"Invalid user authorization"}
```
```shell
curl -i -uuser01:passwordfalse http://127.0.0.1:9080/hello
```
```shell
HTTP/1.1 401 Unauthorized
...
{"message":"Invalid user authorization"}
```
## Delete Plugin
To remove the `ldap-auth` Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"methods": ["GET"],
"uri": "/hello",
"plugins": {},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
}
}'
```


@@ -0,0 +1,420 @@
---
title: limit-conn
keywords:
- Apache APISIX
- API Gateway
- Limit Connection
description: The limit-conn plugin restricts the rate of requests by managing concurrent connections. Requests exceeding the threshold may be delayed or rejected, ensuring controlled API usage and preventing overload.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
<head>
<link rel="canonical" href="https://docs.api7.ai/hub/limit-conn" />
</head>
## Description
The `limit-conn` Plugin limits the rate of requests by the number of concurrent connections. Requests exceeding the threshold will be delayed or rejected based on the configuration, ensuring controlled resource usage and preventing overload.
## Attributes
| Name | Type | Required | Default | Valid values | Description |
|------------|---------|----------|-------------|-------------------|-----------------|
| conn | integer | True | | > 0 | The maximum number of concurrent requests allowed. Requests exceeding the configured limit and below `conn + burst` will be delayed. |
| burst | integer | True | | >= 0 | The number of excessive concurrent requests allowed to be delayed per second. Requests exceeding the limit will be rejected immediately. |
| default_conn_delay | number | True | | > 0 | Processing latency allowed in seconds for concurrent requests exceeding `conn + burst`, which can be dynamically adjusted based on `only_use_default_delay` setting. |
| only_use_default_delay | boolean | False | false | | If false, delay requests proportionally based on how much they exceed the `conn` limit. The delay grows larger as congestion increases. For instance, with `conn` being `5`, `burst` being `3`, and `default_conn_delay` being `1`, 6 concurrent requests would result in a 1-second delay, 7 requests a 2-second delay, 8 requests a 3-second delay, and so on, until the total limit of `conn + burst` is reached, beyond which requests are rejected. If true, use `default_conn_delay` to delay all excessive requests within the `burst` range. Requests beyond `conn + burst` are rejected immediately. For instance, with `conn` being `5`, `burst` being `3`, and `default_conn_delay` being `1`, 6, 7, or 8 concurrent requests are all delayed by exactly 1 second each. See the illustration after this table. |
| key_type | string | False | var | ["var","var_combination"] | The type of key. If the `key_type` is `var`, the `key` is interpreted as a variable. If the `key_type` is `var_combination`, the `key` is interpreted as a combination of variables. |
| key | string | False | remote_addr | | The key to count requests by. If the `key_type` is `var`, the `key` is interpreted as a variable. The variable does not need to be prefixed by a dollar sign (`$`). If the `key_type` is `var_combination`, the `key` is interpreted as a combination of variables. All variables should be prefixed by dollar signs (`$`). For example, to configure the `key` to use a combination of two request headers `custom-a` and `custom-b`, the `key` should be configured as `$http_custom_a $http_custom_b`. |
| rejected_code | integer | False | 503 | [200,...,599] | The HTTP status code returned when a request is rejected for exceeding the threshold. |
| rejected_msg | string | False | | non-empty | The response body returned when a request is rejected for exceeding the threshold. |
| allow_degradation | boolean | False | false | | If true, allow APISIX to continue handling requests without the Plugin when the Plugin or its dependencies become unavailable. |
| policy | string | False | local | ["local","redis","redis-cluster"] | The policy for rate limiting counter. If it is `local`, the counter is stored in memory locally. If it is `redis`, the counter is stored on a Redis instance. If it is `redis-cluster`, the counter is stored in a Redis cluster. |
| redis_host | string | False | | | The address of the Redis node. Required when `policy` is `redis`. |
| redis_port | integer | False | 6379 | [1,...] | The port of the Redis node when `policy` is `redis`. |
| redis_username | string | False | | | The username for Redis if Redis ACL is used. If you use the legacy authentication method `requirepass`, configure only the `redis_password`. Used when `policy` is `redis`. |
| redis_password | string | False | | | The password of the Redis node when `policy` is `redis` or `redis-cluster`. |
| redis_ssl | boolean | False | false | | If true, use SSL to connect to Redis when `policy` is `redis`. |
| redis_ssl_verify | boolean | False | false | | If true, verify the server SSL certificate when `policy` is `redis`. |
| redis_database | integer | False | 0 | >= 0 | The database number in Redis when `policy` is `redis`. |
| redis_timeout | integer | False | 1000 | [1,...] | The Redis timeout value in milliseconds when `policy` is `redis` or `redis-cluster`. |
| redis_cluster_nodes | array[string] | False | | | The list of the Redis cluster nodes with at least two addresses. Required when policy is redis-cluster. |
| redis_cluster_name | string | False | | | The name of the Redis cluster. Required when `policy` is `redis-cluster`. |
| redis_cluster_ssl | boolean | False | false | | If true, use SSL to connect to the Redis cluster when `policy` is `redis-cluster`. |
| redis_cluster_ssl_verify | boolean | False | false | | If true, verify the server SSL certificate when `policy` is `redis-cluster`. |
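To make the two `only_use_default_delay` modes easier to compare, the following illustration restates the example from the table above, assuming `conn: 5`, `burst: 3`, and `default_conn_delay: 1`:
```text
only_use_default_delay = false (proportional delay)
  6 concurrent requests -> the 6th request is delayed ~1 second
  7 concurrent requests -> the 7th request is delayed ~2 seconds
  8 concurrent requests -> the 8th request is delayed ~3 seconds
  9 concurrent requests -> the 9th request is rejected (exceeds conn + burst)

only_use_default_delay = true (fixed delay)
  6, 7, or 8 concurrent requests -> each excessive request is delayed ~1 second
  9 concurrent requests          -> the 9th request is rejected (exceeds conn + burst)
```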
## Examples
The examples below demonstrate how you can configure `limit-conn` in different scenarios.
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
### Apply Rate Limiting by Remote Address
The following example demonstrates how to use `limit-conn` to rate limit requests by `remote_addr`, with example connection and burst thresholds.
Create a Route with `limit-conn` Plugin to allow 2 concurrent requests and 1 excessive concurrent request. Additionally:
* Configure the Plugin to allow 0.1 second of processing latency for concurrent requests exceeding `conn + burst`.
* Set the key type to `var` to interpret `key` as a variable.
* Calculate the rate limiting count by the request's `remote_addr`.
* Set `policy` to `local` to use the local counter in memory.
* Customize the `rejected_code` to `429`.
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "limit-conn-route",
"uri": "/get",
"plugins": {
"limit-conn": {
"conn": 2,
"burst": 1,
"default_conn_delay": 0.1,
"key_type": "var",
"key": "remote_addr",
"policy": "local",
"rejected_code": 429
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Send five concurrent requests to the route:
```shell
seq 1 5 | xargs -n1 -P5 bash -c 'curl -s -o /dev/null -w "Response: %{http_code}\n" "http://127.0.0.1:9080/get"'
```
You should see responses similar to the following, where excessive requests are rejected:
```text
Response: 200
Response: 200
Response: 200
Response: 429
Response: 429
```
### Apply Rate Limiting by Remote Address and Consumer Name
The following example demonstrates how to use `limit-conn` to rate limit requests by a combination of variables, `remote_addr` and `consumer_name`.
Create a Consumer `john`:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"username": "john"
}'
```
Create `key-auth` Credential for the Consumer:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers/john/credentials" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "cred-john-key-auth",
"plugins": {
"key-auth": {
"key": "john-key"
}
}
}'
```
Create a second Consumer `jane`:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"username": "jane"
}'
```
Create `key-auth` Credential for the Consumer:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers/jane/credentials" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "cred-jane-key-auth",
"plugins": {
"key-auth": {
"key": "jane-key"
}
}
}'
```
Create a Route with `key-auth` and `limit-conn` Plugins, and specify in the `limit-conn` Plugin to use a combination of variables as the rate limiting key:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "limit-conn-route",
"uri": "/get",
"plugins": {
"key-auth": {},
"limit-conn": {
"conn": 2,
"burst": 1,
"default_conn_delay": 0.1,
"rejected_code": 429,
"key_type": "var_combination",
"key": "$remote_addr $consumer_name"
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Send five concurrent requests as the Consumer `john`:
```shell
seq 1 5 | xargs -n1 -P5 bash -c 'curl -s -o /dev/null -w "Response: %{http_code}\n" "http://127.0.0.1:9080/get" -H "apikey: john-key"'
```
You should see responses similar to the following, where excessive requests are rejected:
```text
Response: 200
Response: 200
Response: 200
Response: 429
Response: 429
```
Immediately send five concurrent requests as the Consumer `jane`:
```shell
seq 1 5 | xargs -n1 -P5 bash -c 'curl -s -o /dev/null -w "Response: %{http_code}\n" "http://127.0.0.1:9080/get" -H "apikey: jane-key"'
```
You should also see responses similar to the following, where excessive requests are rejected:
```text
Response: 200
Response: 200
Response: 200
Response: 429
Response: 429
```
### Rate Limit WebSocket Connections
The following example demonstrates how you can use the `limit-conn` Plugin to limit the number of concurrent WebSocket connections.
Start a [sample upstream WebSocket server](https://hub.docker.com/r/jmalloc/echo-server):
```shell
docker run -d \
-p 8080:8080 \
--name websocket-server \
--network=apisix-quickstart-net \
jmalloc/echo-server
```
Create a Route to the server WebSocket endpoint and enable WebSocket for the route. Adjust the WebSocket server address accordingly.
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT -d '
{
"id": "ws-route",
"uri": "/.ws",
"plugins": {
"limit-conn": {
"conn": 2,
"burst": 1,
"default_conn_delay": 0.1,
"key_type": "var",
"key": "remote_addr",
"rejected_code": 429
}
},
"enable_websocket": true,
"upstream": {
"type": "roundrobin",
"nodes": {
"websocket-server:8080": 1
}
}
}'
```
Install a WebSocket client, such as [websocat](https://github.com/vi/websocat), if you have not already. Establish connection with the WebSocket server through the route:
```shell
websocat "ws://127.0.0.1:9080/.ws"
```
Send a "hello" message in the terminal, you should see the WebSocket server echoes back the same message:
```text
Request served by 1cd244052136
hello
hello
```
Open three more terminal sessions and run:
```shell
websocat "ws://127.0.0.1:9080/.ws"
```
You should see that the last terminal session prints `429 Too Many Requests` when it tries to establish a WebSocket connection with the server, due to the rate limiting effect.
### Share Quota Among APISIX Nodes with a Redis Server
The following example demonstrates the rate limiting of requests across multiple APISIX nodes with a Redis server, such that different APISIX nodes share the same rate limiting quota.
On each APISIX instance, create a Route with the following configurations. Adjust the address of the Admin API, Redis host, port, password, and database accordingly.
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "limit-conn-route",
"uri": "/get",
"plugins": {
"limit-conn": {
"conn": 1,
"burst": 1,
"default_conn_delay": 0.1,
"rejected_code": 429,
"key_type": "var",
"key": "remote_addr",
"policy": "redis",
"redis_host": "192.168.xxx.xxx",
"redis_port": 6379,
"redis_password": "p@ssw0rd",
"redis_database": 1
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Send five concurrent requests to the route:
```shell
seq 1 5 | xargs -n1 -P5 bash -c 'curl -s -o /dev/null -w "Response: %{http_code}\n" "http://127.0.0.1:9080/get"'
```
You should see responses similar to the following, where excessive requests are rejected:
```text
Response: 200
Response: 200
Response: 429
Response: 429
Response: 429
```
This shows that the routes configured in different APISIX instances share the same quota.
### Share Quota Among APISIX Nodes with a Redis Cluster
You can also use a Redis cluster to apply the same quota across multiple APISIX nodes, such that different APISIX nodes share the same rate limiting quota.
Ensure that your Redis instances are running in [cluster mode](https://redis.io/docs/management/scaling/#create-and-use-a-redis-cluster). A minimum of two nodes are required for the `limit-conn` Plugin configurations.
On each APISIX instance, create a Route with the following configurations. Adjust the address of the Admin API, Redis cluster nodes, password, cluster name, and SSL verification accordingly.
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "limit-conn-route",
"uri": "/get",
"plugins": {
"limit-conn": {
"conn": 1,
"burst": 1,
"default_conn_delay": 0.1,
"rejected_code": 429,
"key_type": "var",
"key": "remote_addr",
"policy": "redis-cluster",
"redis_cluster_nodes": [
"192.168.xxx.xxx:6379",
"192.168.xxx.xxx:16379"
],
"redis_password": "p@ssw0rd",
"redis_cluster_name": "redis-cluster-1",
"redis_cluster_ssl": true
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Send five concurrent requests to the route:
```shell
seq 1 5 | xargs -n1 -P5 bash -c 'curl -s -o /dev/null -w "Response: %{http_code}\n" "http://127.0.0.1:9080/get"'
```
You should see responses similar to the following, where excessive requests are rejected:
```text
Response: 200
Response: 200
Response: 429
Response: 429
Response: 429
```
This shows that the routes configured in different APISIX instances share the same quota.


@@ -0,0 +1,507 @@
---
title: limit-count
keywords:
- Apache APISIX
- API Gateway
- Limit Count
description: The limit-count plugin uses a fixed window algorithm to limit the rate of requests by the number of requests within a given time interval. Requests exceeding the configured quota will be rejected.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
<head>
<link rel="canonical" href="https://docs.api7.ai/hub/limit-count" />
</head>
## Description
The `limit-count` plugin uses a fixed window algorithm to limit the rate of requests by the number of requests within a given time interval. Requests exceeding the configured quota will be rejected.
You may see the following rate limiting headers in the response:
* `X-RateLimit-Limit`: the total quota
* `X-RateLimit-Remaining`: the remaining quota
* `X-RateLimit-Reset`: number of seconds left for the counter to reset
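For example, a response served under an active limit may carry headers similar to the following (the exact values depend on your configuration and timing):
```text
X-RateLimit-Limit: 1
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 28
```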
## Attributes
| Name | Type | Required | Default | Valid values | Description |
| ----------------------- | ------- | ----------------------------------------- | ------------- | -------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| count | integer | True | | > 0 | The maximum number of requests allowed within a given time interval. |
| time_window | integer | True | | > 0 | The time interval corresponding to the rate limiting `count` in seconds. |
| key_type | string | False | var | ["var","var_combination","constant"] | The type of key. If the `key_type` is `var`, the `key` is interpreted as a variable. If the `key_type` is `var_combination`, the `key` is interpreted as a combination of variables. If the `key_type` is `constant`, the `key` is interpreted as a constant. |
| key | string | False | remote_addr | | The key to count requests by. If the `key_type` is `var`, the `key` is interpreted as a variable. The variable does not need to be prefixed by a dollar sign (`$`). If the `key_type` is `var_combination`, the `key` is interpreted as a combination of variables. All variables should be prefixed by dollar signs (`$`). For example, to configure the `key` to use a combination of two request headers `custom-a` and `custom-b`, the `key` should be configured as `$http_custom_a $http_custom_b`. If the `key_type` is `constant`, the `key` is interpreted as a constant value. |
| rejected_code | integer | False | 503 | [200,...,599] | The HTTP status code returned when a request is rejected for exceeding the threshold. |
| rejected_msg | string | False | | non-empty | The response body returned when a request is rejected for exceeding the threshold. |
| policy | string | False | local | ["local","redis","redis-cluster"] | The policy for rate limiting counter. If it is `local`, the counter is stored in memory locally. If it is `redis`, the counter is stored on a Redis instance. If it is `redis-cluster`, the counter is stored in a Redis cluster. |
| allow_degradation | boolean | False | false | | If true, allow APISIX to continue handling requests without the plugin when the plugin or its dependencies become unavailable. |
| show_limit_quota_header | boolean | False | true | | If true, include `X-RateLimit-Limit` to show the total quota and `X-RateLimit-Remaining` to show the remaining quota in the response header. |
| group | string | False | | non-empty | The `group` ID for the plugin, such that routes of the same `group` can share the same rate limiting counter. |
| redis_host | string | False | | | The address of the Redis node. Required when `policy` is `redis`. |
| redis_port | integer | False | 6379 | [1,...] | The port of the Redis node when `policy` is `redis`. |
| redis_username | string | False | | | The username for Redis if Redis ACL is used. If you use the legacy authentication method `requirepass`, configure only the `redis_password`. Used when `policy` is `redis`. |
| redis_password | string | False | | | The password of the Redis node when `policy` is `redis` or `redis-cluster`. |
| redis_ssl | boolean | False | false | | If true, use SSL to connect to Redis when `policy` is `redis`. |
| redis_ssl_verify | boolean | False | false | | If true, verify the server SSL certificate when `policy` is `redis`. |
| redis_database | integer | False | 0 | >= 0 | The database number in Redis when `policy` is `redis`. |
| redis_timeout | integer | False | 1000 | [1,...] | The Redis timeout value in milliseconds when `policy` is `redis` or `redis-cluster`. |
| redis_cluster_nodes | array[string] | False | | | The list of the Redis cluster nodes with at least two addresses. Required when policy is redis-cluster. |
| redis_cluster_name | string | False | | | The name of the Redis cluster. Required when `policy` is `redis-cluster`. |
| redis_cluster_ssl | boolean | False | false | | If true, use SSL to connect to Redis cluster when `policy` is `redis-cluster`. |
| redis_cluster_ssl_verify | boolean | False | false | | If true, verify the server SSL certificate when `policy` is `redis-cluster`. |
## Examples
The examples below demonstrate how you can configure `limit-count` in different scenarios.
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
### Apply Rate Limiting by Remote Address
The following example demonstrates the rate limiting of requests by a single variable, `remote_addr`.
Create a Route with `limit-count` plugin that allows for a quota of 1 within a 30-second window per remote address:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "limit-count-route",
"uri": "/get",
"plugins": {
"limit-count": {
"count": 1,
"time_window": 30,
"rejected_code": 429,
"key_type": "var",
"key": "remote_addr"
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Send a request to verify:
```shell
curl -i "http://127.0.0.1:9080/get"
```
You should see an `HTTP/1.1 200 OK` response.
The request has consumed all the quota allowed for the time window. If you send the request again within the same 30-second time interval, you should receive an `HTTP/1.1 429 Too Many Requests` response, indicating the request surpasses the quota threshold.
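For instance, repeating the request within the same window shows the rejection:
```shell
curl -i "http://127.0.0.1:9080/get"
```
```text
HTTP/1.1 429 Too Many Requests
```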
### Apply Rate Limiting by Remote Address and Consumer Name
The following example demonstrates the rate limiting of requests by a combination of variables, `remote_addr` and `consumer_name`. It allows for a quota of 1 within a 30-second window per remote address and for each consumer.
Create a Consumer `john`:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"username": "john"
}'
```
Create `key-auth` Credential for the consumer:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers/john/credentials" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "cred-john-key-auth",
"plugins": {
"key-auth": {
"key": "john-key"
}
}
}'
```
Create a second Consumer `jane`:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"username": "jane"
}'
```
Create `key-auth` Credential for the Consumer:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers/jane/credentials" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "cred-jane-key-auth",
"plugins": {
"key-auth": {
"key": "jane-key"
}
}
}'
```
Create a Route with `key-auth` and `limit-count` plugins, and specify in the `limit-count` plugin to use a combination of variables as the rate limiting key:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "limit-count-route",
"uri": "/get",
"plugins": {
"key-auth": {},
"limit-count": {
"count": 1,
"time_window": 30,
"rejected_code": 429,
"key_type": "var_combination",
"key": "$remote_addr $consumer_name"
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Send a request as the Consumer `jane`:
```shell
curl -i "http://127.0.0.1:9080/get" -H 'apikey: jane-key'
```
You should see an `HTTP/1.1 200 OK` response with the corresponding response body.
This request has consumed all the quota set for the time window. If you send the same request as the Consumer `jane` within the same 30-second time interval, you should receive an `HTTP/1.1 429 Too Many Requests` response, indicating the request surpasses the quota threshold.
Send the same request as the Consumer `john` within the same 30-second time interval:
```shell
curl -i "http://127.0.0.1:9080/get" -H 'apikey: john-key'
```
You should see an `HTTP/1.1 200 OK` response with the corresponding response body, indicating the request is not rate limited.
If you send the same request as the Consumer `john` again within the same 30-second time interval, you should receive an `HTTP/1.1 429 Too Many Requests` response.
This verifies the plugin rate limits by the combination of variables, `remote_addr` and `consumer_name`.
### Share Quota among Routes
The following example demonstrates the sharing of rate limiting quota among multiple routes by configuring the `group` of the `limit-count` plugin.
Note that the configurations of the `limit-count` plugin of the same `group` should be identical. To avoid update anomalies and repetitive configurations, you can create a Service with `limit-count` plugin and Upstream for routes to connect to.
Create a service:
```shell
curl "http://127.0.0.1:9180/apisix/admin/services" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "limit-count-service",
"plugins": {
"limit-count": {
"count": 1,
"time_window": 30,
"rejected_code": 429,
"group": "srv1"
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Create two Routes and configure their `service_id` to be `limit-count-service`, so that they share the same configurations for the Plugin and Upstream:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "limit-count-route-1",
"service_id": "limit-count-service",
"uri": "/get1",
"plugins": {
"proxy-rewrite": {
"uri": "/get"
}
}
}'
```
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "limit-count-route-2",
"service_id": "limit-count-service",
"uri": "/get2",
"plugins": {
"proxy-rewrite": {
"uri": "/get"
}
}
}'
```
:::note
The [`proxy-rewrite`](./proxy-rewrite.md) plugin is used to rewrite the URI to `/get` so that requests are forwarded to the correct endpoint.
:::
Send a request to Route `/get1`:
```shell
curl -i "http://127.0.0.1:9080/get1"
```
You should see an `HTTP/1.1 200 OK` response with the corresponding response body.
Send the same request to Route `/get2` within the same 30-second time interval:
```shell
curl -i "http://127.0.0.1:9080/get2"
```
You should receive an `HTTP/1.1 429 Too Many Requests` response, which verifies the two routes share the same rate limiting quota.
### Share Quota Among APISIX Nodes with a Redis Server
The following example demonstrates the rate limiting of requests across multiple APISIX nodes with a Redis server, such that different APISIX nodes share the same rate limiting quota.
On each APISIX instance, create a Route with the following configurations. Adjust the address of the Admin API, Redis host, port, password, and database accordingly.
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "limit-count-route",
"uri": "/get",
"plugins": {
"limit-count": {
"count": 1,
"time_window": 30,
"rejected_code": 429,
"key": "remote_addr",
"policy": "redis",
"redis_host": "192.168.xxx.xxx",
"redis_port": 6379,
"redis_password": "p@ssw0rd",
"redis_database": 1
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Send a request to an APISIX instance:
```shell
curl -i "http://127.0.0.1:9080/get"
```
You should see an `HTTP/1.1 200 OK` response with the corresponding response body.
If you send the same request to a different APISIX instance within the same 30-second time interval, you should receive an `HTTP/1.1 429 Too Many Requests` response, verifying that routes configured in different APISIX nodes share the same quota.
### Share Quota Among APISIX Nodes with a Redis Cluster
You can also use a Redis cluster to apply the same quota across multiple APISIX nodes, such that different APISIX nodes share the same rate limiting quota.
Ensure that your Redis instances are running in [cluster mode](https://redis.io/docs/management/scaling/#create-and-use-a-redis-cluster). A minimum of two nodes are required for the `limit-count` plugin configurations.
On each APISIX instance, create a Route with the following configurations. Adjust the address of the Admin API, Redis cluster nodes, password, cluster name, and SSL verification accordingly.
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "limit-count-route",
"uri": "/get",
"plugins": {
"limit-count": {
"count": 1,
"time_window": 30,
"rejected_code": 429,
"key": "remote_addr",
"policy": "redis-cluster",
"redis_cluster_nodes": [
"192.168.xxx.xxx:6379",
"192.168.xxx.xxx:16379"
],
"redis_password": "p@ssw0rd",
"redis_cluster_name": "redis-cluster-1",
"redis_cluster_ssl": true
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Send a request to an APISIX instance:
```shell
curl -i "http://127.0.0.1:9080/get"
```
You should see an `HTTP/1.1 200 OK` response with the corresponding response body.
If you send the same request to a different APISIX instance within the same 30-second time interval, you should receive an `HTTP/1.1 429 Too Many Requests` response, verifying that routes configured in different APISIX nodes share the same quota.
### Rate Limit with Anonymous Consumer
The following example demonstrates how to configure different rate limiting policies for regular and anonymous Consumers, where the anonymous Consumer does not need to authenticate and has a smaller quota. While this example uses [`key-auth`](./key-auth.md) for authentication, the anonymous Consumer can also be configured with [`basic-auth`](./basic-auth.md), [`jwt-auth`](./jwt-auth.md), and [`hmac-auth`](./hmac-auth.md).
Create a regular Consumer `john` and configure the `limit-count` plugin to allow for a quota of 3 within a 30-second window:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"username": "john",
"plugins": {
"limit-count": {
"count": 3,
"time_window": 30,
"rejected_code": 429
}
}
}'
```
Create the `key-auth` Credential for the Consumer `john`:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers/john/credentials" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "cred-john-key-auth",
"plugins": {
"key-auth": {
"key": "john-key"
}
}
}'
```
Create an anonymous user `anonymous` and configure the `limit-count` Plugin to allow for a quota of 1 within a 30-second window:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"username": "anonymous",
"plugins": {
"limit-count": {
"count": 1,
"time_window": 30,
"rejected_code": 429
}
}
}'
```
Create a Route and configure the `key-auth` Plugin to allow the anonymous Consumer `anonymous` to bypass authentication:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "key-auth-route",
"uri": "/anything",
"plugins": {
"key-auth": {
"anonymous_consumer": "anonymous"
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
To verify, send five consecutive requests with `john`'s key:
```shell
resp=$(seq 5 | xargs -I{} curl "http://127.0.0.1:9080/anything" -H 'apikey: john-key' -o /dev/null -s -w "%{http_code}\n") && \
count_200=$(echo "$resp" | grep "200" | wc -l) && \
count_429=$(echo "$resp" | grep "429" | wc -l) && \
echo "200": $count_200, "429": $count_429
```
You should see the following response, showing that out of the 5 requests, 3 requests were successful (status code 200) while the others were rejected (status code 429).
```text
200: 3, 429: 2
```
Send five anonymous requests:
```shell
resp=$(seq 5 | xargs -I{} curl "http://127.0.0.1:9080/anything" -o /dev/null -s -w "%{http_code}\n") && \
count_200=$(echo "$resp" | grep "200" | wc -l) && \
count_429=$(echo "$resp" | grep "429" | wc -l) && \
echo "200": $count_200, "429": $count_429
```
You should see the following response, showing that only one request was successful:
```text
200: 1, 429: 4
```


@@ -0,0 +1,284 @@
---
title: limit-req
keywords:
- Apache APISIX
- API Gateway
- Limit Request
- limit-req
description: The limit-req Plugin uses the leaky bucket algorithm to rate limit the number of the requests and allow for throttling.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
<head>
<link rel="canonical" href="https://docs.api7.ai/hub/limit-req" />
</head>
## Description
The `limit-req` Plugin uses the [leaky bucket](https://en.wikipedia.org/wiki/Leaky_bucket) algorithm to rate limit the number of the requests and allow for throttling.
## Attributes
| Name | Type | Required | Default | Valid values | Description |
|-------------------|---------|----------|---------|----------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| rate | integer | True | | > 0 | The maximum number of requests allowed per second. Requests exceeding the rate and below burst will be delayed. |
| burst | integer | True | | >= 0 | The number of requests allowed to be delayed per second for throttling. Requests exceeding the rate and burst will get rejected. |
| key_type | string | False | var | ["var", "var_combination"] | The type of key. If the `key_type` is `var`, the `key` is interpreted as a variable. If the `key_type` is `var_combination`, the `key` is interpreted as a combination of variables. |
| key | string | True | remote_addr | | The key to count requests by. If the `key_type` is `var`, the `key` is interpreted as a variable. The variable does not need to be prefixed by a dollar sign (`$`). If the `key_type` is `var_combination`, the `key` is interpreted as a combination of variables. All variables should be prefixed by dollar signs (`$`). For example, to configure the `key` to use a combination of two request headers `custom-a` and `custom-b`, the `key` should be configured as `$http_custom_a $http_custom_b`. |
| rejected_code | integer | False | 503 | [200,...,599] | The HTTP status code returned when a request is rejected for exceeding the threshold. |
| rejected_msg | string | False | | non-empty | The response body returned when a request is rejected for exceeding the threshold. |
| nodelay | boolean | False | false | | If true, do not delay requests within the burst threshold. |
| allow_degradation | boolean | False | false | | If true, allow APISIX to continue handling requests without the Plugin when the Plugin or its dependencies become unavailable. |
| policy | string | False | local | ["local", "redis", "redis-cluster"] | The policy for rate limiting counter. If it is `local`, the counter is stored in memory locally. If it is `redis`, the counter is stored on a Redis instance. If it is `redis-cluster`, the counter is stored in a Redis cluster. |
| redis_host | string | False | | | The address of the Redis node. Required when `policy` is `redis`. |
| redis_port | integer | False | 6379 | [1,...] | The port of the Redis node when `policy` is `redis`. |
| redis_username | string | False | | | The username for Redis if Redis ACL is used. If you use the legacy authentication method `requirepass`, configure only the `redis_password`. Used when `policy` is `redis`. |
| redis_password | string | False | | | The password of the Redis node when `policy` is `redis` or `redis-cluster`. |
| redis_ssl | boolean | False | false | | If true, use SSL to connect to Redis when `policy` is `redis`. |
| redis_ssl_verify | boolean | False | false | | If true, verify the server SSL certificate when `policy` is `redis`. |
| redis_database | integer | False | 0 | >= 0 | The database number in Redis when `policy` is `redis`. |
| redis_timeout | integer | False | 1000 | [1,...] | The Redis timeout value in milliseconds when `policy` is `redis` or `redis-cluster`. |
| redis_cluster_nodes | array[string] | False | | | The list of the Redis cluster nodes with at least two addresses. Required when policy is redis-cluster. |
| redis_cluster_name | string | False | | | The name of the Redis cluster. Required when `policy` is `redis-cluster`. |
| redis_cluster_ssl | boolean | False | false | | If true, use SSL to connect to the Redis cluster when `policy` is `redis-cluster`. |
| redis_cluster_ssl_verify | boolean | False | false | | If true, verify the server SSL certificate when `policy` is `redis-cluster`. |
## Examples
The examples below demonstrate how you can configure `limit-req` in different scenarios.
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
### Apply Rate Limiting by Remote Address
The following example demonstrates the rate limiting of HTTP requests by a single variable, `remote_addr`.
Create a Route with `limit-req` Plugin that allows for 1 QPS per remote address:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '
{
"id": "limit-req-route",
"uri": "/get",
"plugins": {
"limit-req": {
"rate": 1,
"burst": 0,
"key": "remote_addr",
"key_type": "var",
"rejected_code": 429,
"nodelay": true
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Send a request to verify:
```shell
curl -i "http://127.0.0.1:9080/get"
```
You should see an `HTTP/1.1 200 OK` response.
The request has consumed all the quota allowed for the time window. If you send the request again within the same second, you should receive an `HTTP/1.1 429 Too Many Requests` response, indicating the request surpasses the quota threshold.
### Implement API Throttling
The following example demonstrates how to configure `burst` to allow overrun of the rate limiting threshold by the configured value and achieve request throttling. You will also see a comparison against when throttling is not implemented.
Create a Route with `limit-req` Plugin that allows for 1 QPS per remote address, with a `burst` of 1 to allow for 1 request exceeding the `rate` to be delayed for processing:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "limit-req-route",
"uri": "/get",
"plugins": {
"limit-req": {
"rate": 1,
"burst": 1,
"key": "remote_addr",
"rejected_code": 429
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Generate three requests to the Route:
```shell
resp=$(seq 3 | xargs -I{} curl -i "http://127.0.0.1:9080/get" -o /dev/null -s -w "%{http_code}\n") && \
count_200=$(echo "$resp" | grep "200" | wc -l) && \
count_429=$(echo "$resp" | grep "429" | wc -l) && \
echo "200 responses: $count_200 ; 429 responses: $count_429"
```
You are likely to see that all three requests are successful:
```text
200 responses: 3 ; 429 responses: 0
```
To see the effect without `burst`, update `burst` to 0 or set `nodelay` to `true` as follows:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes/limit-req-route" -X PATCH \
-H "X-API-KEY: ${admin_key}" \
-d '{
"plugins": {
"limit-req": {
"nodelay": true
}
}
}'
```
Generate three requests to the Route again:
```shell
resp=$(seq 3 | xargs -I{} curl -i "http://127.0.0.1:9080/get" -o /dev/null -s -w "%{http_code}\n") && \
count_200=$(echo "$resp" | grep "200" | wc -l) && \
count_429=$(echo "$resp" | grep "429" | wc -l) && \
echo "200 responses: $count_200 ; 429 responses: $count_429"
```
You should see a response similar to the following, showing requests surpassing the rate have been rejected:
```text
200 responses: 1 ; 429 responses: 2
```
### Apply Rate Limiting by Remote Address and Consumer Name
The following example demonstrates the rate limiting of requests by a combination of variables, `remote_addr` and `consumer_name`.
Create a Route with `limit-req` Plugin that allows for 1 QPS per remote address and for each Consumer.
Create a Consumer `john`:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"username": "john"
}'
```
Create `key-auth` Credential for the Consumer:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers/john/credentials" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "cred-john-key-auth",
"plugins": {
"key-auth": {
"key": "john-key"
}
}
}'
```
Create a second Consumer `jane`:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"username": "jane"
}'
```
Create `key-auth` Credential for the Consumer:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers/jane/credentials" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "cred-jane-key-auth",
"plugins": {
"key-auth": {
"key": "jane-key"
}
}
}'
```
Create a Route with `key-auth` and `limit-req` Plugins, and specify in the `limit-req` Plugin to use a combination of variables as the rate-limiting key:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "limit-req-route",
"uri": "/get",
"plugins": {
"key-auth": {},
"limit-req": {
"rate": 1,
"burst": 0,
"key": "$remote_addr $consumer_name",
"key_type": "var_combination",
"rejected_code": 429
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Send two requests simultaneously, each for one Consumer:
```shell
curl -i "http://127.0.0.1:9080/get" -H 'apikey: jane-key' & \
curl -i "http://127.0.0.1:9080/get" -H 'apikey: john-key' &
```
You should receive `HTTP/1.1 200 OK` for both requests, indicating the request has not exceeded the threshold for each Consumer.
If you send more requests as either Consumer within the same second, you should receive an `HTTP/1.1 429 Too Many Requests` response.
This verifies the Plugin rate limits by the combination of variables, `remote_addr` and `consumer_name`.


@@ -0,0 +1,118 @@
---
title: log-rotate
keywords:
- Apache APISIX
- API Gateway
- Plugin
- Log rotate
description: This document contains information about the Apache APISIX log-rotate Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `log-rotate` Plugin rotates access and error log files in the log directory at regular intervals.
You can configure how often the logs are rotated and how many log files to keep. When the number of log files exceeds the configured limit, older logs are automatically deleted.
## Attributes
| Name | Type | Required | Default | Description |
|--------------------|---------|----------|---------|------------------------------------------------------------------------------------------------|
| interval | integer | True | 60 * 60 | Time in seconds specifying how often to rotate the logs. |
| max_kept | integer | True | 24 * 7 | Maximum number of historical logs to keep. If this number is exceeded, older logs are deleted. |
| max_size           | integer | False    | -1      | Maximum size (in bytes) a log file can reach before being rotated. The size check is skipped if the value is less than 0 or if the rotation time specified by `interval` is reached first. |
| enable_compression | boolean | False | false | When set to `true`, compresses the log file (gzip). Requires `tar` to be installed. |
## Enable Plugin
To enable the Plugin, add it in your configuration file (`conf/config.yaml`):
```yaml title="conf/config.yaml"
plugins:
- log-rotate
plugin_attr:
log-rotate:
interval: 3600 # rotate interval (unit: second)
max_kept: 168 # max number of log files will be kept
max_size: -1 # max size of log files will be kept
enable_compression: false # enable log file compression(gzip) or not, default false
```
## Example usage
Once you enable the Plugin as shown above, the logs will be stored and rotated based on your configuration.
In the example below the `interval` is set to `10` and `max_kept` is set to `10`. This will create logs as shown:
```shell
ll logs
```
```shell
total 44K
-rw-r--r--. 1 resty resty 0 Mar 20 20:32 2020-03-20_20-32-40_access.log
-rw-r--r--. 1 resty resty 2.4K Mar 20 20:32 2020-03-20_20-32-40_error.log
-rw-r--r--. 1 resty resty 0 Mar 20 20:32 2020-03-20_20-32-50_access.log
-rw-r--r--. 1 resty resty 2.8K Mar 20 20:32 2020-03-20_20-32-50_error.log
-rw-r--r--. 1 resty resty 0 Mar 20 20:32 2020-03-20_20-33-00_access.log
-rw-r--r--. 1 resty resty 2.4K Mar 20 20:33 2020-03-20_20-33-00_error.log
-rw-r--r--. 1 resty resty 0 Mar 20 20:33 2020-03-20_20-33-10_access.log
-rw-r--r--. 1 resty resty 2.4K Mar 20 20:33 2020-03-20_20-33-10_error.log
-rw-r--r--. 1 resty resty 0 Mar 20 20:33 2020-03-20_20-33-20_access.log
-rw-r--r--. 1 resty resty 2.4K Mar 20 20:33 2020-03-20_20-33-20_error.log
-rw-r--r--. 1 resty resty 0 Mar 20 20:33 2020-03-20_20-33-30_access.log
-rw-r--r--. 1 resty resty 2.4K Mar 20 20:33 2020-03-20_20-33-30_error.log
-rw-r--r--. 1 resty resty 0 Mar 20 20:33 2020-03-20_20-33-40_access.log
-rw-r--r--. 1 resty resty 2.8K Mar 20 20:33 2020-03-20_20-33-40_error.log
-rw-r--r--. 1 resty resty 0 Mar 20 20:33 2020-03-20_20-33-50_access.log
-rw-r--r--. 1 resty resty 2.4K Mar 20 20:33 2020-03-20_20-33-50_error.log
-rw-r--r--. 1 resty resty 0 Mar 20 20:33 2020-03-20_20-34-00_access.log
-rw-r--r--. 1 resty resty 2.4K Mar 20 20:34 2020-03-20_20-34-00_error.log
-rw-r--r--. 1 resty resty 0 Mar 20 20:34 2020-03-20_20-34-10_access.log
-rw-r--r--. 1 resty resty 2.4K Mar 20 20:34 2020-03-20_20-34-10_error.log
-rw-r--r--. 1 resty resty 0 Mar 20 20:34 access.log
-rw-r--r--. 1 resty resty 1.5K Mar 20 21:31 error.log
```
If you have enabled compression, the logs will be as shown below:
```shell
total 10.5K
-rw-r--r--. 1 resty resty 1.5K Mar 20 20:33 2020-03-20_20-33-50_access.log.tar.gz
-rw-r--r--. 1 resty resty 1.5K Mar 20 20:33 2020-03-20_20-33-50_error.log.tar.gz
-rw-r--r--. 1 resty resty 1.5K Mar 20 20:33 2020-03-20_20-34-00_access.log.tar.gz
-rw-r--r--. 1 resty resty 1.5K Mar 20 20:34 2020-03-20_20-34-00_error.log.tar.gz
-rw-r--r--. 1 resty resty 1.5K Mar 20 20:34 2020-03-20_20-34-10_access.log.tar.gz
-rw-r--r--. 1 resty resty 1.5K Mar 20 20:34 2020-03-20_20-34-10_error.log.tar.gz
-rw-r--r--. 1 resty resty 0 Mar 20 20:34 access.log
-rw-r--r--. 1 resty resty 1.5K Mar 20 21:31 error.log
```
## Delete Plugin
To remove the `log-rotate` Plugin, you can remove it from your configuration file (`conf/config.yaml`):
```yaml title="conf/config.yaml"
plugins:
# - log-rotate
```


@@ -0,0 +1,183 @@
---
title: loggly
keywords:
- Apache APISIX
- API Gateway
- Plugin
- SolarWinds Loggly
description: This document contains information about the Apache APISIX loggly Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `loggly` Plugin is used to forward logs to [SolarWinds Loggly](https://www.solarwinds.com/loggly) for analysis and storage.
When the Plugin is enabled, APISIX will serialize the request context information to [Loggly Syslog](https://documentation.solarwinds.com/en/success_center/loggly/content/admin/streaming-syslog-without-using-files.htm?cshid=loggly_streaming-syslog-without-using-files) data format which is Syslog events with [RFC5424](https://datatracker.ietf.org/doc/html/rfc5424) compliant headers.
When the maximum batch size is exceeded, the data in the queue is pushed to Loggly enterprise syslog endpoint. See [batch processor](../batch-processor.md) for more details.
## Attributes
| Name | Type | Required | Default | Description |
|------------------------|---------------|----------|---------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| customer_token | string | True | | Unique identifier used when sending logs to Loggly to ensure that they are sent to the right organisation account. |
| severity               | string (enum) | False    | INFO    | Syslog log event severity level. Choose from: `DEBUG`, `INFO`, `NOTICE`, `WARNING`, `ERR`, `CRIT`, `ALERT`, and `EMERG`. |
| severity_map | object | False | nil | A way to map upstream HTTP response codes to Syslog severity. Key-value pairs where keys are the HTTP response codes and the values are the Syslog severity levels. For example `{"410": "CRIT"}`. |
| tags | array | False | | Metadata to be included with any event log to aid in segmentation and filtering. |
| log_format | object | False | {"host": "$host", "@timestamp": "$time_iso8601", "client_ip": "$remote_addr"} | Log format declared as key value pairs in JSON format. Values only support strings. [APISIX](../apisix-variable.md) or [Nginx](http://nginx.org/en/docs/varindex.html) variables can be used by prefixing the string with `$`. |
| include_req_body | boolean | False | false | When set to `true` includes the request body in the log. If the request body is too big to be kept in the memory, it can't be logged due to Nginx's limitations. |
| include_req_body_expr | array | False | | Filter for when the `include_req_body` attribute is set to `true`. Request body is only logged when the expression set here evaluates to `true`. See [lua-resty-expr](https://github.com/api7/lua-resty-expr) for more. |
| include_resp_body | boolean | False | false | When set to `true` includes the response body in the log. |
| include_resp_body_expr | array | False | | When the `include_resp_body` attribute is set to `true`, use this to filter based on [lua-resty-expr](https://github.com/api7/lua-resty-expr). If present, only logs the response if the expression evaluates to `true`. |
This Plugin supports using batch processors to aggregate and process entries (logs/data) in a batch. This avoids the need for frequently submitting the data. The batch processor submits data every `5` seconds or when the data in the queue reaches `1000`. See [Batch Processor](../batch-processor.md#configuration) for more information or setting your custom configuration.
To generate a Customer token, go to `<your assigned subdomain>.loggly.com/tokens` or navigate to Logs > Source setup > Customer tokens.
### Example of default log format
```text
<10>1 2024-01-06T06:50:51.739Z 127.0.0.1 apisix 58525 - [token-1@41058 tag="apisix"] {"service_id":"","server":{"version":"3.7.0","hostname":"localhost"},"apisix_latency":100.99985313416,"request":{"url":"http://127.0.0.1:1984/opentracing","headers":{"content-type":"application/x-www-form-urlencoded","user-agent":"lua-resty-http/0.16.1 (Lua) ngx_lua/10025","host":"127.0.0.1:1984"},"querystring":{},"uri":"/opentracing","size":155,"method":"GET"},"response":{"headers":{"content-type":"text/plain","server":"APISIX/3.7.0","transfer-encoding":"chunked","connection":"close"},"size":141,"status":200},"route_id":"1","latency":103.99985313416,"upstream_latency":3,"client_ip":"127.0.0.1","upstream":"127.0.0.1:1982","start_time":1704523851634}
```
## Metadata
You can also configure the Plugin through Plugin metadata. The following configurations are available:
| Name | Type | Required | Default | Valid values | Description |
|------------|---------|----------|----------------------|--------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| host | string | False | "logs-01.loggly.com" | | Endpoint of the host where the logs are being sent. |
| port | integer | False | 514 | | Loggly port to connect to. Only used for `syslog` protocol. |
| timeout | integer | False | 5000 | | Loggly send data request timeout in milliseconds. |
| protocol | string | False | "syslog" | [ "syslog" , "http", "https" ] | Protocol in which the logs are sent to Loggly. |
| log_format | object | False | nil | | Log format declared as key value pairs in JSON format. Values only support strings. [APISIX](../apisix-variable.md) or [Nginx](http://nginx.org/en/docs/varindex.html) variables can be used by prefixing the string with `$`. |
APISIX supports the [Syslog](https://documentation.solarwinds.com/en/success_center/loggly/content/admin/streaming-syslog-without-using-files.htm) and [HTTP/S](https://documentation.solarwinds.com/en/success_center/loggly/content/admin/http-bulk-endpoint.htm) (bulk endpoint) protocols to send log events to Loggly. By default, the protocol is set to `syslog`, which lets you send RFC5424-compliant syslog events with fine-grained control, such as mapping log severity based on the upstream HTTP response code. The HTTP/S bulk endpoint is better suited to sending larger batches of log events at a faster transmission speed. To switch protocols, update the Plugin metadata as shown below.
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl http://127.0.0.1:9180/apisix/admin/plugin_metadata/loggly -H "X-API-KEY: $admin_key" -X PUT -d '
{
"protocol": "http"
}'
```
## Enable Plugin
### Full configuration
The example below shows a complete configuration of the Plugin on a specific Route:
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"plugins":{
"loggly":{
"customer_token":"0e6fe4bf-376e-40f4-b25f-1d55cb29f5a2",
"tags":["apisix", "testroute"],
"severity":"info",
"severity_map":{
"503": "err",
"410": "alert"
},
"buffer_duration":60,
"max_retry_count":0,
"retry_delay":1,
"inactive_timeout":2,
"batch_max_size":10
}
},
"upstream":{
"type":"roundrobin",
"nodes":{
"127.0.0.1:80":1
}
},
"uri":"/index.html"
}'
```
### Minimal configuration
The example below shows a bare minimum configuration of the Plugin on a Route:
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"plugins":{
"loggly":{
"customer_token":"0e6fe4bf-376e-40f4-b25f-1d55cb29f5a2",
}
},
"upstream":{
"type":"roundrobin",
"nodes":{
"127.0.0.1:80":1
}
},
"uri":"/index.html"
}'
```
## Example usage
Now, if you make a request to APISIX, it will be logged in Loggly:
```shell
curl -i http://127.0.0.1:9080/index.html
```
You can then view the logs on your Loggly Dashboard:
![Loggly Dashboard](../../../assets/images/plugin/loggly-dashboard.png)
## Delete Plugin
To remove the `loggly` Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/index.html",
"plugins": {},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:80": 1
}
}
}'
```
@@ -0,0 +1,403 @@
---
title: loki-logger
keywords:
- Apache APISIX
- API Gateway
- Plugin
- Loki-logger
- Grafana Loki
description: The loki-logger Plugin pushes request and response logs in batches to Grafana Loki, via the Loki HTTP API /loki/api/v1/push. The Plugin also supports the customization of log formats.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
<head>
<link rel="canonical" href="https://docs.api7.ai/hub/loki-logger" />
</head>
## Description
The `loki-logger` Plugin pushes request and response logs in batches to [Grafana Loki](https://grafana.com/oss/loki/), via the [Loki HTTP API](https://grafana.com/docs/loki/latest/reference/loki-http-api/#loki-http-api) `/loki/api/v1/push`. The Plugin also supports the customization of log formats.
When enabled, the Plugin will serialize the request context information to [JSON objects](https://grafana.com/docs/loki/latest/api/#push-log-entries-to-loki) and add them to the queue, before they are pushed to Loki. See [batch processor](../batch-processor.md) for more details.
## Attributes
| Name | Type | Required | Default | Valid values | Description |
|---|---|---|---|---|---|
| endpoint_addrs | array[string] | True | | | Loki API base URLs, such as `http://127.0.0.1:3100`. If multiple endpoints are configured, the log will be pushed to a randomly determined endpoint from the list. |
| endpoint_uri | string | False | /loki/api/v1/push | | URI path to the Loki ingest endpoint. |
| tenant_id | string | False | fake | | Loki tenant ID. According to Loki's [multi-tenancy documentation](https://grafana.com/docs/loki/latest/operations/multi-tenancy/#multi-tenancy), the default value is set to `fake` under single-tenancy. |
| headers | object | False | | | Key-value pairs of request headers (settings for `X-Scope-OrgID` and `Content-Type` will be ignored). |
| log_labels | object | False | {job = "apisix"} | | Loki log label. Support [NGINX variables](https://nginx.org/en/docs/varindex.html) and constant strings in values. Variables should be prefixed with a `$` sign. For example, the label can be `{"origin" = "apisix"}` or `{"origin" = "$remote_addr"}`. |
| ssl_verify | boolean | False | true | | If true, verify Loki's SSL certificates. |
| timeout | integer | False | 3000 | [1, 60000] | Timeout for the Loki service HTTP call in milliseconds. |
| keepalive | boolean | False | true | | If true, keep the connection alive for multiple requests. |
| keepalive_timeout | integer | False | 60000 | >=1000 | Keepalive timeout in milliseconds. |
| keepalive_pool | integer | False | 5 | >=1 | Maximum number of connections in the connection pool. |
| log_format | object | False | | | Custom log format in key-value pairs in JSON format. Support [APISIX variables](../apisix-variable.md) and [NGINX variables](http://nginx.org/en/docs/varindex.html) in values. |
| name | string | False | loki-logger | | Unique identifier of the Plugin for the batch processor. If you use [Prometheus](./prometheus.md) to monitor APISIX metrics, the name is exported in `apisix_batch_process_entries`. |
| include_req_body | boolean | False | false | | If true, include the request body in the log. Note that if the request body is too big to be kept in the memory, it can not be logged due to NGINX's limitations. |
| include_req_body_expr | array[array] | False | | | An array of one or more conditions in the form of [lua-resty-expr](https://github.com/api7/lua-resty-expr). Used when the `include_req_body` is true. Request body would only be logged when the expressions configured here evaluate to true. |
| include_resp_body | boolean | False | false | | If true, include the response body in the log. |
| include_resp_body_expr | array[array] | False | | | An array of one or more conditions in the form of [lua-resty-expr](https://github.com/api7/lua-resty-expr). Used when the `include_resp_body` is true. Response body would only be logged when the expressions configured here evaluate to true. |
This Plugin supports using batch processors to aggregate and process entries (logs/data) in a batch. This avoids the need for frequently submitting the data. The batch processor submits data every `5` seconds or when the data in the queue reaches `1000`. See [Batch Processor](../batch-processor.md#configuration) for more information or setting your custom configuration.
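For reference, below is a minimal sketch of tuning the batch processor on a Route. The endpoint address and values are illustrative, and `$admin_key` is assumed to hold your Admin API key (see the Examples section below):

```shell
curl "http://127.0.0.1:9180/apisix/admin/routes/loki-logger-route" -X PUT \
  -H "X-API-KEY: ${admin_key}" \
  -d '{
    "uri": "/anything",
    "plugins": {
      "loki-logger": {
        "endpoint_addrs": ["http://192.168.1.5:3100"],
        "batch_max_size": 100,
        "inactive_timeout": 10
      }
    },
    "upstream": {
      "nodes": {
        "httpbin.org:80": 1
      },
      "type": "roundrobin"
    }
  }'
```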
## Plugin Metadata
You can also configure log format on a global scale using the [Plugin Metadata](../terminology/plugin-metadata.md), which configures the log format for all `loki-logger` Plugin instances. If the log format configured on the individual Plugin instance differs from the log format configured on Plugin metadata, the log format configured on the individual Plugin instance takes precedence.
| Name | Type | Required | Default | Description |
|------|------|----------|---------|-------------|
| log_format | object | False | | Custom log format in key-value pairs in JSON format. Support [APISIX variables](../apisix-variable.md) and [NGINX variables](http://nginx.org/en/docs/varindex.html) in values. |
## Examples
The examples below demonstrate how you can configure `loki-logger` Plugin for different scenarios.
To follow along the examples, start a sample Loki instance in Docker:
```shell
wget https://raw.githubusercontent.com/grafana/loki/v3.0.0/cmd/loki/loki-local-config.yaml -O loki-config.yaml
docker run --name loki -d -v $(pwd):/mnt/config -p 3100:3100 grafana/loki:3.2.1 -config.file=/mnt/config/loki-config.yaml
```
Additionally, start a Grafana instance to view and visualize the logs:
```shell
docker run -d --name=apisix-quickstart-grafana \
-p 3000:3000 \
grafana/grafana-oss
```
To connect Loki and Grafana, visit Grafana at [`http://localhost:3000`](http://localhost:3000). Under __Connections > Data sources__, add a new data source and select Loki. Your connection URL should follow the format of `http://{your_ip_address}:3100`. When saving the new data source, Grafana should also test the connection, and you are expected to see Grafana notifying the data source is successfully connected.
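Before connecting, you may optionally verify that the Loki instance is up by calling its readiness endpoint; it should return `ready` once startup has finished:

```shell
curl -i "http://127.0.0.1:3100/ready"
```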
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
### Log Requests and Responses in Default Log Format
The following example demonstrates how you can configure the `loki-logger` Plugin on a Route to log requests and responses going through the route.
Create a Route with the `loki-logger` Plugin and configure the address of Loki:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "loki-logger-route",
"uri": "/anything",
"plugins": {
"loki-logger": {
"endpoint_addrs": ["http://192.168.1.5:3100"]
}
},
"upstream": {
"nodes": {
"httpbin.org:80": 1
},
"type": "roundrobin"
}
}'
```
Send a few requests to the Route to generate log entries:
```shell
curl "http://127.0.0.1:9080/anything"
```
You should receive `HTTP/1.1 200 OK` responses for all requests.
Navigate to the [Grafana explore view](http://localhost:3000/explore) and run a query `job = apisix`. You should see a number of logs corresponding to your requests, such as the following:
```json
{
"route_id": "loki-logger-route",
"response": {
"status": 200,
"headers": {
"date": "Fri, 03 Jan 2025 03:54:26 GMT",
"server": "APISIX/3.11.0",
"access-control-allow-credentials": "true",
"content-length": "391",
"access-control-allow-origin": "*",
"content-type": "application/json",
"connection": "close"
},
"size": 619
},
"start_time": 1735876466,
"client_ip": "192.168.65.1",
"service_id": "",
"apisix_latency": 5.0000038146973,
"upstream": "34.197.122.172:80",
"upstream_latency": 666,
"server": {
"hostname": "0b9a772e68f8",
"version": "3.11.0"
},
"request": {
"headers": {
"user-agent": "curl/8.6.0",
"accept": "*/*",
"host": "127.0.0.1:9080"
},
"size": 85,
"method": "GET",
"url": "http://127.0.0.1:9080/anything",
"querystring": {},
"uri": "/anything"
},
"latency": 671.0000038147
}
```
This verifies that Loki has been receiving logs from APISIX. You may also create dashboards in Grafana to further visualize and analyze the logs.
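Alternatively, as a quick check without Grafana, you can query Loki's HTTP API directly using the same label selector (a sketch; by default the query covers roughly the last hour):

```shell
curl -G "http://127.0.0.1:3100/loki/api/v1/query_range" \
  --data-urlencode 'query={job="apisix"}'
```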
### Customize Log Format with Plugin Metadata
The following example demonstrates how you can customize log format using [Plugin Metadata](../terminology/plugin-metadata.md).
Create a Route with the `loki-logger` plugin:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "loki-logger-route",
"uri": "/anything",
"plugins": {
"loki-logger": {
"endpoint_addrs": ["http://192.168.1.5:3100"]
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org": 1
}
}
}'
```
Configure Plugin metadata for `loki-logger`, which will update the log format for all routes of which requests would be logged:
```shell
curl "http://127.0.0.1:9180/apisix/admin/plugin_metadata/loki-logger" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"log_format": {
"host": "$host",
"client_ip": "$remote_addr",
"route_id": "$route_id",
"@timestamp": "$time_iso8601"
}
}'
```
Send a request to the Route to generate a new log entry:
```shell
curl -i "http://127.0.0.1:9080/anything"
```
You should receive an `HTTP/1.1 200 OK` response.
Navigate to the [Grafana explore view](http://localhost:3000/explore) and run a query `job = apisix`. You should see a log entry corresponding to your request, similar to the following:
```json
{
"@timestamp":"2025-01-03T21:11:34+00:00",
"client_ip":"192.168.65.1",
"route_id":"loki-logger-route",
"host":"127.0.0.1"
}
```
If the Plugin on a Route specifies a specific log format, it will take precedence over the log format specified in the Plugin metadata. For instance, update the Plugin on the previous Route as such:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes/loki-logger-route" -X PATCH \
-H "X-API-KEY: ${admin_key}" \
-d '{
"plugins": {
"loki-logger": {
"log_format": {
"route_id": "$route_id",
"client_ip": "$remote_addr",
"@timestamp": "$time_iso8601"
}
}
}
}'
```
Send a request to the Route to generate a new log entry:
```shell
curl -i "http://127.0.0.1:9080/anything"
```
You should receive an `HTTP/1.1 200 OK` response.
Navigate to the [Grafana explore view](http://localhost:3000/explore) and re-run the query `job = apisix`. You should see a log entry corresponding to your request, consistent with the format configured on the route, similar to the following:
```json
{
"client_ip":"192.168.65.1",
"route_id":"loki-logger-route",
"@timestamp":"2025-01-03T21:19:45+00:00"
}
```
### Log Request Bodies Conditionally
The following example demonstrates how you can conditionally log request body.
Create a Route with `loki-logger` to only log request body if the URL query string `log_body` is `yes`:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "loki-logger-route",
"uri": "/anything",
"plugins": {
"loki-logger": {
"endpoint_addrs": ["http://192.168.1.5:3100"],
"include_req_body": true,
"include_req_body_expr": [["arg_log_body", "==", "yes"]]
}
},
"upstream": {
"nodes": {
"httpbin.org:80": 1
},
"type": "roundrobin"
}
}'
```
Send a request to the Route with a URL query string satisfying the condition:
```shell
curl -i "http://127.0.0.1:9080/anything?log_body=yes" -X POST -d '{"env": "dev"}'
```
Navigate to the [Grafana explore view](http://localhost:3000/explore) and run the query `job = apisix`. You should see a log entry corresponding to your request, where the request body is logged:
```json
{
"route_id": "loki-logger-route",
...,
"request": {
"headers": {
...
},
"body": "{\"env\": \"dev\"}",
"size": 182,
"method": "POST",
"url": "http://127.0.0.1:9080/anything?log_body=yes",
"querystring": {
"log_body": "yes"
},
"uri": "/anything?log_body=yes"
},
"latency": 809.99994277954
}
```
Send a request to the Route without any URL query string:
```shell
curl -i "http://127.0.0.1:9080/anything" -X POST -d '{"env": "dev"}'
```
Navigate to the [Grafana explore view](http://localhost:3000/explore) and run the query `job = apisix`. You should see a log entry corresponding to your request, where the request body is not logged:
```json
{
"route_id": "loki-logger-route",
...,
"request": {
"headers": {
...
},
"size": 169,
"method": "POST",
"url": "http://127.0.0.1:9080/anything",
"querystring": {},
"uri": "/anything"
},
"latency": 557.00016021729
}
```
:::info
If you have customized the `log_format` in addition to setting `include_req_body` or `include_resp_body` to `true`, the Plugin would not include the bodies in the logs.
As a workaround, you may be able to use the NGINX variable `$request_body` in the log format, such as:
```json
{
"kafka-logger": {
...,
"log_format": {"body": "$request_body"}
}
}
```
:::
## FAQ
### Logs are not pushed properly
Check `error.log` for a log entry such as the following.
```text
2023/04/30 13:45:46 [error] 19381#19381: *1075673 [lua] batch-processor.lua:95: Batch Processor[loki logger] failed to process entries: loki server returned status: 401, body: no org id, context: ngx.timer, client: 127.0.0.1, server: 0.0.0.0:9081
```
The error can be diagnosed from the status code and response body returned by the Loki server, in this case `401` and `no org id`.
### Getting errors when RPS is high?
- Make sure the `keepalive`-related configuration is set properly. See [Attributes](#attributes) for more information and the sketch at the end of this section.
- Check `error.log` for a log entry such as the following.
```text
2023/04/30 13:49:34 [error] 19381#19381: *1082680 [lua] batch-processor.lua:95: Batch Processor[loki logger] failed to process entries: loki server returned status: 429, body: Ingestion rate limit exceeded for user tenant_1 (limit: 4194304 bytes/sec) while attempting to ingest '1000' lines totaling '616307' bytes, reduce log volume or contact your Loki administrator to see if the limit can be increased, context: ngx.timer, client: 127.0.0.1, server: 0.0.0.0:9081
```
- Logs like the above are usually associated with a high request rate. The error is: `Ingestion rate limit exceeded for user tenant_1 (limit: 4194304 bytes/sec) while attempting to ingest '1000' lines totaling '616307' bytes, reduce log volume or contact your Loki administrator to see if the limit can be increased`.
- Refer to [Loki documentation](https://grafana.com/docs/loki/latest/configuration/#limits_config) to add limits on the amount of default and burst logs, such as `ingestion_rate_mb` and `ingestion_burst_size_mb`.
In tests during development, setting `ingestion_burst_size_mb` to `100` allowed APISIX to push logs correctly at 10,000 RPS or more.
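As a sketch, the keepalive settings could also be raised on the Route as shown below, mirroring the PATCH usage demonstrated earlier (the values are illustrative only):

```shell
curl "http://127.0.0.1:9180/apisix/admin/routes/loki-logger-route" -X PATCH \
  -H "X-API-KEY: ${admin_key}" \
  -d '{
    "plugins": {
      "loki-logger": {
        "keepalive": true,
        "keepalive_timeout": 60000,
        "keepalive_pool": 10
      }
    }
  }'
```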
@@ -0,0 +1,250 @@
---
title: mocking
keywords:
- Apache APISIX
- API Gateway
- Plugin
- Mocking
description: This document contains information about the Apache APISIX mocking Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `mocking` Plugin is used for mocking an API. When executed, it returns random mock data in the format specified and the request is not forwarded to the Upstream.
## Attributes
| Name | Type | Required | Default | Description |
|------------------|---------|----------|------------------|----------------------------------------------------------------------------------------|
| delay | integer | False | | Response delay in seconds. |
| response_status | integer | False | 200 | HTTP status code of the response. |
| content_type | string | False | application/json | Header `Content-Type` of the response. |
| response_example | string | False | | Body of the response. Supports the use of variables, such as `$remote_addr $consumer_name`. |
| response_schema | object | False | | The JSON schema object for the response. Works when `response_example` is unspecified. |
| with_mock_header | boolean | False | true | When set to `true`, adds a response header `x-mock-by: APISIX/{version}`. |
| response_headers | object | False | | Headers to be added in the mocked response. Example: `{"X-Foo": "bar", "X-Few": "baz"}` |
The JSON schema supports the following types in their fields:
- `string`
- `number`
- `integer`
- `boolean`
- `object`
- `array`
Here is a JSON schema example:
```json
{
"properties":{
"field0":{
"example":"abcd",
"type":"string"
},
"field1":{
"example":123.12,
"type":"number"
},
"field3":{
"properties":{
"field3_1":{
"type":"string"
},
"field3_2":{
"properties":{
"field3_2_1":{
"example":true,
"type":"boolean"
},
"field3_2_2":{
"items":{
"example":155.55,
"type":"integer"
},
"type":"array"
}
},
"type":"object"
}
},
"type":"object"
},
"field2":{
"items":{
"type":"string"
},
"type":"array"
}
},
"type":"object"
}
```
This is the response generated by the Plugin from this JSON schema:
```json
{
"field1": 123.12,
"field3": {
"field3_1": "LCFE0",
"field3_2": {
"field3_2_1": true,
"field3_2_2": [
155,
155
]
}
},
"field0": "abcd",
"field2": [
"sC"
]
}
```
## Enable Plugin
The example below configures the `mocking` Plugin for a specific Route:
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"methods": ["GET"],
"uri": "/index.html",
"plugins": {
"mocking": {
"delay": 1,
"content_type": "application/json",
"response_status": 200,
"response_schema": {
"properties":{
"field0":{
"example":"abcd",
"type":"string"
},
"field1":{
"example":123.12,
"type":"number"
},
"field3":{
"properties":{
"field3_1":{
"type":"string"
},
"field3_2":{
"properties":{
"field3_2_1":{
"example":true,
"type":"boolean"
},
"field3_2_2":{
"items":{
"example":155.55,
"type":"integer"
},
"type":"array"
}
},
"type":"object"
}
},
"type":"object"
},
"field2":{
"items":{
"type":"string"
},
"type":"array"
}
},
"type":"object"
}
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
}
}'
```
## Example usage
Once you have configured the Plugin as mentioned above, you can test the Route.
The example used here uses the following `mocking` Plugin configuration, which returns a fixed mocked response:
```json
{
"delay":0,
"content_type":"",
"with_mock_header":true,
"response_status":201,
"response_example":"{\"a\":1,\"b\":2}"
}
```
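A Route using this configuration could be created as follows. This is only a sketch: the route ID and the `/test-mock` URI are chosen to match the test request below, and `$admin_key` is assumed to be set as shown earlier:

```shell
curl http://127.0.0.1:9180/apisix/admin/routes/test-mock -H "X-API-KEY: $admin_key" -X PUT -d '
{
    "uri": "/test-mock",
    "plugins": {
        "mocking": {
            "delay": 0,
            "content_type": "",
            "with_mock_header": true,
            "response_status": 201,
            "response_example": "{\"a\":1,\"b\":2}"
        }
    },
    "upstream": {
        "type": "roundrobin",
        "nodes": {
            "127.0.0.1:1980": 1
        }
    }
}'
```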
Now to test the Route:
```shell
curl http://127.0.0.1:9080/test-mock -i
```
```
HTTP/1.1 201 Created
...
Content-Type: application/json;charset=utf8
x-mock-by: APISIX/2.10.0
...
{"a":1,"b":2}
```
## Delete Plugin
To remove the `mocking` Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"methods": ["GET"],
"uri": "/index.html",
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
}
}'
```
@@ -0,0 +1,169 @@
---
title: mqtt-proxy
keywords:
- Apache APISIX
- API Gateway
- Plugin
- MQTT Proxy
description: This document contains information about the Apache APISIX mqtt-proxy Plugin. The `mqtt-proxy` Plugin is used for dynamic load balancing with `client_id` of MQTT.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `mqtt-proxy` Plugin is used for dynamic load balancing with the `client_id` of MQTT. It only works in stream mode.
This Plugin supports both MQTT protocol versions [3.1.*](http://docs.oasis-open.org/mqtt/mqtt/v3.1.1/os/mqtt-v3.1.1-os.html) and [5.0](https://docs.oasis-open.org/mqtt/mqtt/v5.0/mqtt-v5.0.html).
## Attributes
| Name | Type | Required | Description |
|----------------|---------|------------|-----------------------------------------------------------------------------------|
| protocol_name | string | True | Name of the protocol. Generally `MQTT`. |
| protocol_level | integer | True | Level of the protocol. It should be `4` for MQTT `3.1.*` and `5` for MQTT `5.0`. |
## Enable Plugin
To enable the Plugin, you need to first enable the `stream_proxy` configuration in your configuration file (`conf/config.yaml`). The below configuration represents listening on the `9100` TCP port:
```yaml title="conf/config.yaml"
...
router:
http: 'radixtree_uri'
ssl: 'radixtree_sni'
stream_proxy: # TCP/UDP proxy
tcp: # TCP proxy port list
- 9100
dns_resolver:
...
```
You can now send the MQTT request to port `9100`.
You can now create a stream Route and enable the `mqtt-proxy` Plugin:
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl http://127.0.0.1:9180/apisix/admin/stream_routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"plugins": {
"mqtt-proxy": {
"protocol_name": "MQTT",
"protocol_level": 4
}
},
"upstream": {
"type": "roundrobin",
"nodes": [{
"host": "127.0.0.1",
"port": 1980,
"weight": 1
}]
}
}'
```
:::note
If you are using Docker in macOS, then `host.docker.internal` is the right parameter for the `host` attribute.
:::
This Plugin exposes a variable `mqtt_client_id` which can be used for load balancing as shown below:
```shell
curl http://127.0.0.1:9180/apisix/admin/stream_routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"plugins": {
"mqtt-proxy": {
"protocol_name": "MQTT",
"protocol_level": 4
}
},
"upstream": {
"type": "chash",
"key": "mqtt_client_id",
"nodes": [
{
"host": "127.0.0.1",
"port": 1995,
"weight": 1
},
{
"host": "127.0.0.2",
"port": 1995,
"weight": 1
}
]
}
}'
```
MQTT connections with different client IDs will be forwarded to different nodes based on the consistent hashing algorithm. If the client ID is missing, the client IP is used instead for load balancing.
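For example, you could test the stream proxy with an MQTT client such as `mosquitto_pub` (a sketch; the client ID, topic, and message are arbitrary, and the Mosquitto clients must be installed separately):

```shell
mosquitto_pub -h 127.0.0.1 -p 9100 -i "client-1" -t "test/topic" -m "hello" -q 1
```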
## Enabling mTLS with mqtt-proxy plugin
Stream proxies use TCP connections and can accept TLS. Follow the guide about [how to accept tls over tcp connections](../stream-proxy.md/#accept-tls-over-tcp-connection) to open a stream proxy with enabled TLS.
The `mqtt-proxy` plugin is enabled through TCP communications on the specified port for the stream proxy, and will also require clients to authenticate via TLS if `tls` is set to `true`.
Configure `ssl` providing the CA certificate and the server certificate, together with a list of SNIs. Steps to protect `stream_routes` with `ssl` are equivalent to the ones to [protect Routes](../mtls.md/#protect-route).
### Create a stream_route using mqtt-proxy plugin and mTLS
Here is an example of how to create a stream_route that uses the `mqtt-proxy` plugin, providing the CA certificate, the client certificate, and the client key (for self-signed certificates which are not trusted by your host, use the `-k` flag):
```shell
curl 127.0.0.1:9180/apisix/admin/stream_routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"plugins": {
"mqtt-proxy": {
"protocol_name": "MQTT",
"protocol_level": 4
}
},
"sni": "${your_sni_name}",
"upstream": {
"nodes": {
"127.0.0.1:1980": 1
},
"type": "roundrobin"
}
}'
```
The `sni` name must match one or more of the SNIs provided to the SSL object that you created with the CA and server certificates.
## Delete Plugin
To remove the `mqtt-proxy` Plugin you can remove the corresponding configuration as shown below:
```shell
curl http://127.0.0.1:9180/apisix/admin/stream_routes/1 -H "X-API-KEY: $admin_key" -X DELETE
```
@@ -0,0 +1,164 @@
---
title: multi-auth
keywords:
- Apache APISIX
- API Gateway
- Plugin
- Multi Auth
- multi-auth
description: This document contains information about the Apache APISIX multi-auth Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `multi-auth` Plugin is used to add multiple authentication methods to a Route or a Service. It supports Plugins of type `auth`, so you can combine different authentication methods using the `multi-auth` Plugin.
This plugin provides a flexible authentication mechanism by iterating through the list of authentication plugins specified in the `auth_plugins` attribute. It allows multiple consumers to share the same route while using different authentication methods. For example, one consumer can authenticate using basic authentication, while another consumer can authenticate using JWT.
## Attributes
For Route:
| Name | Type | Required | Default | Description |
|--------------|-------|----------|---------|-----------------------------------------------------------------------|
| auth_plugins | array | True | - | List of supported authentication Plugin configurations. Expects at least 2 Plugins. |
## Enable Plugin
To enable the Plugin, you have to create two or more Consumer objects with different authentication configurations:
First create a Consumer using basic authentication:
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl http://127.0.0.1:9180/apisix/admin/consumers -H "X-API-KEY: $admin_key" -X PUT -d '
{
"username": "foo1",
"plugins": {
"basic-auth": {
"username": "foo1",
"password": "bar1"
}
}
}'
```
Then create a Consumer using key authentication:
```shell
curl http://127.0.0.1:9180/apisix/admin/consumers -H "X-API-KEY: $admin_key" -X PUT -d '
{
"username": "foo2",
"plugins": {
"key-auth": {
"key": "auth-one"
}
}
}'
```
Once you have created Consumer objects, you can then configure a Route or a Service to authenticate requests:
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"methods": ["GET"],
"uri": "/hello",
"plugins": {
"multi-auth":{
"auth_plugins":[
{
"basic-auth":{ }
},
{
"key-auth":{
"query":"apikey",
"hide_credentials":true,
"header":"apikey"
}
}
]
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
}
}'
```
## Example usage
After you have configured the Plugin as mentioned above, you can make a request to the Route as shown below:
Send a request with `basic-auth` credentials:
```shell
curl -i -ufoo1:bar1 http://127.0.0.1:9080/hello
```
Send a request with `key-auth` credentials:
```shell
curl http://127.0.0.1:9080/hello -H 'apikey: auth-one' -i
```
```
HTTP/1.1 200 OK
...
hello, world
```
If the request is not authorized, a `401 Unauthorized` error will be thrown:
```json
{"message":"Authorization Failed"}
```
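For instance, a request without any credentials (or with credentials that match neither Consumer) would be rejected:

```shell
curl -i http://127.0.0.1:9080/hello
```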
## Delete Plugin
To remove the `multi-auth` Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"methods": ["GET"],
"uri": "/hello",
"plugins": {},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
}
}'
```
@@ -0,0 +1,128 @@
---
title: node-status
keywords:
- Apache APISIX
- API Gateway
- Plugin
- Node status
description: This document contains information about the Apache APISIX node-status Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `node-status` Plugin can be used to get the status of requests to APISIX by exposing an API endpoint.
## Attributes
None.
## API
This Plugin will add the endpoint `/apisix/status` to expose the status of APISIX.
You may need to use the [public-api](public-api.md) Plugin to expose the endpoint.
## Enable Plugin
To configure the `node-status` Plugin, you have to first enable it in your configuration file (`conf/config.yaml`):
```yaml title="conf/config.yaml"
plugins:
- example-plugin
- limit-req
- jwt-auth
- zipkin
- node-status
......
```
You then have to set up the Route for the status API and expose it using the [public-api](public-api.md) Plugin.
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/ns -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/apisix/status",
"plugins": {
"public-api": {}
}
}'
```
## Example usage
Once you have configured the Plugin, you can make a request to the `apisix/status` endpoint to get the status:
```shell
curl http://127.0.0.1:9080/apisix/status -i
```
```shell
HTTP/1.1 200 OK
Date: Tue, 03 Nov 2020 11:12:55 GMT
Content-Type: text/plain; charset=utf-8
Transfer-Encoding: chunked
Connection: keep-alive
Server: APISIX web server
{"status":{"total":"23","waiting":"0","accepted":"22","writing":"1","handled":"22","active":"1","reading":"0"},"id":"6790a064-8f61-44ba-a6d3-5df42f2b1bb3"}
```
The parameters in the response are described below:
| Parameter | Description |
|-----------|------------------------------------------------------------------------------------------------------------------------|
| status | Status of APISIX. |
| total | Total number of client requests. |
| waiting | Number of idle client connections waiting for a request. |
| accepted | Number of accepted client connections. |
| writing | Number of connections to which APISIX is writing back a response. |
| handled | Number of handled connections. Generally, this value is the same as `accepted` unless a resource limit is reached. |
| active | Number of active client connections including `waiting` connections. |
| reading | Number of connections where APISIX is reading the request header. |
| id | UID of APISIX instance saved in `apisix/conf/apisix.uid`. |
## Delete Plugin
To remove the Plugin, you can remove it from your configuration file (`conf/config.yaml`):
```yaml title="conf/config.yaml"
plugins:
- example-plugin
- limit-req
- jwt-auth
- zipkin
......
```
You can also remove the Route on `/apisix/status`:
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/ns -H "X-API-KEY: $admin_key" -X DELETE
```
@@ -0,0 +1,142 @@
---
title: ocsp-stapling
keywords:
- Apache APISIX
- Plugin
- ocsp-stapling
description: This document contains information about the Apache APISIX ocsp-stapling Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `ocsp-stapling` Plugin dynamically sets the behavior of [OCSP stapling](https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_stapling) in Nginx.
## Enable Plugin
This Plugin is disabled by default. Modify the config file to enable the plugin:
```yaml title="./conf/config.yaml"
plugins:
- ...
- ocsp-stapling
```
After modifying the config file, reload APISIX or hot-reload the Plugin by sending an HTTP request through the Admin API for the change to take effect:
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl http://127.0.0.1:9180/apisix/admin/plugins/reload -H "X-API-KEY: $admin_key" -X PUT
```
## Attributes
The attributes of this plugin are stored in specific field `ocsp_stapling` within SSL Resource.
| Name | Type | Required | Default | Valid values | Description |
|----------------|----------------------|----------|---------------|--------------|-----------------------------------------------------------------------------------------------|
| enabled | boolean | False | false | | Like the `ssl_stapling` directive, enables or disables OCSP stapling feature. |
| skip_verify | boolean | False | false | | Like the `ssl_stapling_verify` directive, enables or disables verification of OCSP responses. |
| cache_ttl      | integer              | False    | 3600          | >= 60        | Specifies the expiration time of the cached OCSP response, in seconds.                          |
## Example usage
You should create an SSL Resource first, and the certificate of the server certificate issuer should be known. Normally the fullchain certificate works fine.
Create an SSL Resource as such:
```shell
curl http://127.0.0.1:9180/apisix/admin/ssls/1 \
-H "X-API-KEY: $admin_key" -X PUT -d '
{
"cert" : "'"$(cat server.crt)"'",
"key": "'"$(cat server.key)"'",
"snis": ["test.com"],
"ocsp_stapling": {
"enabled": true
}
}'
```
Next, establish a secure connection to the server, request the SSL/TLS session status, and display the output from the server:
```shell
echo -n "Q" | openssl s_client -status -connect localhost:9443 -servername test.com 2>&1 | cat
```
```
...
CONNECTED(00000003)
OCSP response:
======================================
OCSP Response Data:
OCSP Response Status: successful (0x0)
...
```
To disable OCSP stapling feature, you can make a request as shown below:
```shell
curl http://127.0.0.1:9180/apisix/admin/ssls/1 \
-H "X-API-KEY: $admin_key" -X PUT -d '
{
"cert" : "'"$(cat server.crt)"'",
"key": "'"$(cat server.key)"'",
"snis": ["test.com"],
"ocsp_stapling": {
"enabled": false
}
}'
```
## Delete Plugin
Make sure that none of your SSL Resources contain the `ocsp_stapling` field anymore. To remove this field, you can make a request as shown below:
```shell
curl http://127.0.0.1:9180/apisix/admin/ssls/1 \
-H "X-API-KEY: $admin_key" -X PATCH -d '
{
"ocsp_stapling": null
}'
```
Modify the config file `./conf/config.yaml` to disable the plugin:
```yaml title="./conf/config.yaml"
plugins:
- ...
# - ocsp-stapling
```
After modifying the config file, reload APISIX or hot-reload the Plugin by sending an HTTP request through the Admin API for the change to take effect:
```shell
curl http://127.0.0.1:9180/apisix/admin/plugins/reload -H "X-API-KEY: $admin_key" -X PUT
```
@@ -0,0 +1,327 @@
---
title: opa
keywords:
- Apache APISIX
- API Gateway
- Plugin
- Open Policy Agent
- opa
description: This document contains information about the Apache APISIX opa Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `opa` Plugin can be used to integrate with [Open Policy Agent (OPA)](https://www.openpolicyagent.org). OPA is a policy engine that helps define and enforce authorization policies, which determine whether a user or application has the necessary permissions to perform a particular action or access a particular resource. Using OPA with APISIX decouples the authorization logic from APISIX.
## Attributes
| Name | Type | Required | Default | Valid values | Description |
|-------------------|---------|----------|---------|---------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| host | string | True | | | Host address of the OPA service. For example, `https://localhost:8181`. |
| ssl_verify | boolean | False | true | | When set to `true` verifies the SSL certificates. |
| policy | string | True | | | OPA policy path. A combination of `package` and `decision`. While using advanced features like custom response, you can omit `decision`. |
| timeout | integer | False | 3000ms | [1, 60000]ms | Timeout for the HTTP call. |
| keepalive | boolean | False | true | | When set to `true`, keeps the connection alive for multiple requests. |
| keepalive_timeout | integer | False | 60000ms | [1000, ...]ms | Idle time after which the connection is closed. |
| keepalive_pool    | integer | False    | 5       | [1, ...]      | Connection pool limit.                                                                                                                                                                       |
| with_route | boolean | False | false | | When set to true, sends information about the current Route. |
| with_service | boolean | False | false | | When set to true, sends information about the current Service. |
| with_consumer | boolean | False | false | | When set to true, sends information about the current Consumer. Note that this may send sensitive information like the API key. Make sure to turn it on only when you are sure it is safe. |
## Data definition
### APISIX to OPA service
The JSON below shows the data sent to the OPA service by APISIX:
```json
{
"type": "http",
"request": {
"scheme": "http",
"path": "\/get",
"headers": {
"user-agent": "curl\/7.68.0",
"accept": "*\/*",
"host": "127.0.0.1:9080"
},
"query": {},
"port": 9080,
"method": "GET",
"host": "127.0.0.1"
},
"var": {
"timestamp": 1701234567,
"server_addr": "127.0.0.1",
"server_port": "9080",
"remote_port": "port",
"remote_addr": "ip address"
},
"route": {},
"service": {},
"consumer": {}
}
```
Each of these keys are explained below:
- `type` indicates the request type (`http` or `stream`).
- `request` is used when the `type` is `http` and contains the basic request information (URL, headers etc).
- `var` contains the basic information about the requested connection (IP, port, request timestamp etc).
- `route`, `service` and `consumer` contains the same data as stored in APISIX and are only sent if the `opa` Plugin is configured on these objects.
### OPA service to APISIX
The JSON below shows the response from the OPA service to APISIX:
```json
{
"result": {
"allow": true,
"reason": "test",
"headers": {
"an": "header"
},
"status_code": 401
}
}
```
The keys in the response are explained below:
- `allow` is indispensable and indicates whether the request is allowed to be forwarded through APISIX.
- `reason`, `headers`, and `status_code` are optional and are only returned when you configure a custom response. See the following sections for use cases.
## Example usage
First, you need to launch the Open Policy Agent environment:
```shell
docker run -d --name opa -p 8181:8181 openpolicyagent/opa:0.35.0 run -s
```
### Basic usage
Once you have the OPA service running, you can create a basic policy:
```shell
curl -X PUT '127.0.0.1:8181/v1/policies/example1' \
-H 'Content-Type: text/plain' \
-d 'package example1
import input.request
default allow = false
allow {
# HTTP method must be GET
request.method == "GET"
}'
```
Then, you can configure the `opa` Plugin on a specific Route:
```shell
curl -X PUT 'http://127.0.0.1:9180/apisix/admin/routes/r1' \
-H 'X-API-KEY: <api-key>' \
-H 'Content-Type: application/json' \
-d '{
"uri": "/*",
"plugins": {
"opa": {
"host": "http://127.0.0.1:8181",
"policy": "example1"
}
},
"upstream": {
"nodes": {
"httpbin.org:80": 1
},
"type": "roundrobin"
}
}'
```
Now, to test it out:
```shell
curl -i -X GET 127.0.0.1:9080/get
```
```shell
HTTP/1.1 200 OK
```
Now, if we try to make a request to a different endpoint, the request will fail:
```
curl -i -X POST 127.0.0.1:9080/post
```
```shell
HTTP/1.1 403 FORBIDDEN
```
### Using custom response
You can also configure custom responses for more complex scenarios:
```shell
curl -X PUT '127.0.0.1:8181/v1/policies/example2' \
-H 'Content-Type: text/plain' \
-d 'package example2
import input.request
default allow = false
allow {
request.method == "GET"
}
# custom response body (Accepts a string or an object, the object will respond as JSON format)
reason = "test" {
not allow
}
# custom response header (The data of the object can be written in this way)
headers = {
"Location": "http://example.com/auth"
} {
not allow
}
# custom response status code
status_code = 302 {
not allow
}'
```
Now you can test it out by changing the `opa` Plugin's `policy` parameter to `example2`.
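One way to do this is to update the Route's Plugin configuration through the Admin API (a sketch; `<api-key>` is a placeholder, as in the earlier examples):

```shell
curl -X PATCH 'http://127.0.0.1:9180/apisix/admin/routes/r1' \
  -H 'X-API-KEY: <api-key>' \
  -H 'Content-Type: application/json' \
  -d '{
    "plugins": {
        "opa": {
            "host": "http://127.0.0.1:8181",
            "policy": "example2"
        }
    }
}'
```

Then make a request: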
```shell
curl -i -X GET 127.0.0.1:9080/get
```
```
HTTP/1.1 200 OK
```
Now if you make a failing request, you will see the custom response from the OPA service:
```
curl -i -X POST 127.0.0.1:9080/post
```
```
HTTP/1.1 302 FOUND
Location: http://example.com/auth
test
```
### Sending APISIX data
If your OPA service needs to make decisions based on APISIX data such as Route and Consumer details, you can configure the Plugin to send them.
The example below shows a simple `echo` policy which will return the data sent by APISIX as it is:
```shell
curl -X PUT '127.0.0.1:8181/v1/policies/echo' \
-H 'Content-Type: text/plain' \
-d 'package echo
allow = false
reason = input'
```
Now we can configure the Plugin on the Route to send APISIX data:
```shell
curl -X PUT 'http://127.0.0.1:9180/apisix/admin/routes/r1' \
-H 'X-API-KEY: <api-key>' \
-H 'Content-Type: application/json' \
-d '{
"uri": "/*",
"plugins": {
"opa": {
"host": "http://127.0.0.1:8181",
"policy": "echo",
"with_route": true
}
},
"upstream": {
"nodes": {
"httpbin.org:80": 1
},
"type": "roundrobin"
}
}'
```
Now if you make a request, you can see the data from the Route through the custom response:
```shell
curl -X GET 127.0.0.1:9080/get
{
"type": "http",
"request": {
xxx
},
"var": {
xxx
},
"route": {
xxx
}
}
```
## Delete Plugin
To remove the `opa` Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"methods": ["GET"],
"uri": "/hello",
"plugins": {},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
}
}'
```
@@ -0,0 +1,170 @@
---
title: openfunction
keywords:
- Apache APISIX
- API Gateway
- Plugin
- OpenFunction
description: This document contains information about the Apache APISIX openfunction Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `openfunction` Plugin is used to integrate APISIX with [CNCF OpenFunction](https://openfunction.dev/) serverless platform.
This Plugin can be configured on a Route and requests will be sent to the configured OpenFunction API endpoint as the upstream.
## Attributes
| Name | Type | Required | Default | Valid values | Description |
| --------------------------- | ------- | -------- | ------- | ------------ | ---------------------------------------------------------------------------------------------------------- |
| function_uri | string | True | | | function uri. For example, `https://localhost:30858/default/function-sample`. |
| ssl_verify | boolean | False | true | | When set to `true` verifies the SSL certificate. |
| authorization | object | False | | | Authorization credentials to access functions of OpenFunction. |
| authorization.service_token | string | False | | | The token format is 'xx:xx' which supports basic auth for function entry points. |
| timeout | integer | False | 3000 ms | [100, ...] ms| OpenFunction action and HTTP call timeout in ms. |
| keepalive | boolean | False | true | | When set to `true` keeps the connection alive for reuse. |
| keepalive_timeout | integer | False | 60000 ms| [1000,...] ms| Time is ms for connection to remain idle without closing. |
| keepalive_pool | integer | False | 5 | [1,...] | Maximum number of requests that can be sent on this connection before closing it. |
:::note
The `timeout` attribute sets the time taken by the OpenFunction to execute, and the timeout for the HTTP client in APISIX. OpenFunction calls may take time to pull the runtime image and start the container. So, if the value is set too small, it may cause a large number of requests to fail.
:::
## Prerequisites
Before configuring the plugin, you need to have OpenFunction running.
Installation of OpenFunction requires a Kubernetes cluster of a compatible version.
For details, please refer to [Installation](https://openfunction.dev/docs/getting-started/installation/).
### Create and Push a Function
You can then create a function following the [sample](https://github.com/OpenFunction/samples).
You'll need to push your function container image to a container registry like Docker Hub or Quay.io when building a function. To do that, you'll need to generate a secret for your container registry first.
```shell
REGISTRY_SERVER=https://index.docker.io/v1/ REGISTRY_USER=${your_registry_user} REGISTRY_PASSWORD=${your_registry_password}
kubectl create secret docker-registry push-secret \
--docker-server=$REGISTRY_SERVER \
--docker-username=$REGISTRY_USER \
--docker-password=$REGISTRY_PASSWORD
```
## Enable the Plugin
You can now configure the Plugin on a specific Route and point to this running OpenFunction service:
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/hello",
"plugins": {
"openfunction": {
"function_uri": "http://localhost:3233/default/function-sample/test",
"authorization": {
"service_token": "test:test"
}
}
}
}'
```
## Example usage
Once you have configured the plugin, you can send a request to the Route and it will invoke the configured function:
```shell
curl -i http://127.0.0.1:9080/hello
```
This will give back the response from the function:
```
hello, test!
```
### Configure Path Transforming
The `OpenFunction` Plugin also supports transforming the URL path while proxying requests to the OpenFunction API endpoints. Extensions to the base request path get appended to the `function_uri` specified in the Plugin configuration.
:::info IMPORTANT
The `uri` configured on a Route must end with `*` for this feature to work properly. APISIX Routes are matched strictly and the `*` implies that any subpath to this URI would be matched to the same Route.
:::
The example below configures this feature:
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/hello/*",
"plugins": {
"openfunction": {
"function_uri": "http://localhost:3233/default/function-sample",
"authorization": {
"service_token": "test:test"
}
}
}
}'
```
Now, any request to the path `/hello/123` will invoke the configured function, and the appended path is forwarded:
```shell
curl http://127.0.0.1:9080/hello/123
```
```shell
Hello, 123!
```
## Delete Plugin
To remove the `openfunction` Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"methods": ["GET"],
"uri": "/index.html",
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
}
}'
```
@@ -0,0 +1,257 @@
---
title: openid-connect
keywords:
- Apache APISIX
- API Gateway
- OpenID Connect
- OIDC
description: The openid-connect Plugin supports the integration with OpenID Connect (OIDC) identity providers, such as Keycloak, Auth0, Microsoft Entra ID, Google, Okta, and more. It allows APISIX to authenticate clients and obtain their information from the identity provider before allowing or denying their access to upstream protected resources.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
<head>
<link rel="canonical" href="https://docs.api7.ai/hub/openid-connect" />
</head>
## Description
The `openid-connect` Plugin supports the integration with [OpenID Connect (OIDC)](https://openid.net/connect/) identity providers, such as Keycloak, Auth0, Microsoft Entra ID, Google, Okta, and more. It allows APISIX to authenticate clients and obtain their information from the identity provider before allowing or denying their access to upstream protected resources.
## Attributes
| Name | Type | Required | Default | Valid values | Description |
|--------------------------------------|----------|----------|-----------------------|--------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| client_id | string | True | | | OAuth client ID. |
| client_secret | string | True | | | OAuth client secret. |
| discovery | string | True | | | URL to the well-known discovery document of the OpenID provider, which contains a list of OP API endpoints. The Plugin can directly utilize the endpoints from the discovery document. You can also configure these endpoints individually, which takes precedence over the endpoints supplied in the discovery document. |
| scope | string | False | openid | | OIDC scope that corresponds to information that should be returned about the authenticated user, also known as [claims](https://openid.net/specs/openid-connect-core-1_0.html#StandardClaims). This is used to authorize users with proper permission. The default value is `openid`, the required scope for OIDC to return a `sub` claim that uniquely identifies the authenticated user. Additional scopes can be appended and delimited by spaces, such as `openid email profile`. |
| required_scopes | array[string] | False | | | Scopes required to be present in the access token. Used in conjunction with the introspection endpoint when `bearer_only` is `true`. If any required scope is missing, the Plugin rejects the request with a 403 forbidden error. |
| realm | string | False | apisix | | Realm in [`WWW-Authenticate`](https://www.rfc-editor.org/rfc/rfc6750#section-3) response header accompanying a 401 unauthorized request due to invalid bearer token. |
| bearer_only | boolean | False | false | | If true, strictly require bearer access token in requests for authentication. |
| logout_path | string | False | /logout | | Path to activate the logout. |
| post_logout_redirect_uri | string | False | | | URL to redirect users to after the `logout_path` receives a request to log out. |
| redirect_uri | string | False | | | URI to redirect to after authentication with the OpenID provider. Note that the redirect URI should not be the same as the request URI, but a sub-path of the request URI. For example, if the `uri` of the Route is `/api/v1/*`, `redirect_uri` can be configured as `/api/v1/redirect`. If `redirect_uri` is not configured, APISIX will append `/.apisix/redirect` to the request URI to determine the value for `redirect_uri`. |
| timeout | integer | False | 3 | [1,...] | Request timeout time in seconds. |
| ssl_verify | boolean | False | false | | If true, verify the OpenID provider's SSL certificates. |
| introspection_endpoint | string | False | | | URL of the [token introspection](https://datatracker.ietf.org/doc/html/rfc7662) endpoint for the OpenID provider used to introspect access tokens. If this is unset, the introspection endpoint presented in the well-known discovery document is used [as a fallback](https://github.com/zmartzone/lua-resty-openidc/commit/cdaf824996d2b499de4c72852c91733872137c9c). |
| introspection_endpoint_auth_method | string | False | client_secret_basic | | Authentication method for the token introspection endpoint. The value should be one of the authentication methods specified in the `introspection_endpoint_auth_methods_supported` [authorization server metadata](https://www.rfc-editor.org/rfc/rfc8414.html) as seen in the well-known discovery document, such as `client_secret_basic`, `client_secret_post`, `private_key_jwt`, and `client_secret_jwt`. |
| token_endpoint_auth_method | string | False | client_secret_basic | | Authentication method for the token endpoint. The value should be one of the authentication methods specified in the `token_endpoint_auth_methods_supported` [authorization server metadata](https://www.rfc-editor.org/rfc/rfc8414.html) as seen in the well-known discovery document, such as `client_secret_basic`, `client_secret_post`, `private_key_jwt`, and `client_secret_jwt`. If the configured method is not supported, fall back to the first method in the `token_endpoint_auth_methods_supported` array. |
| public_key | string | False | | | Public key used to verify the JWT signature if an asymmetric algorithm is used. Providing this value to perform token verification will skip token introspection in the client credentials flow. You can pass the public key in `-----BEGIN PUBLIC KEY-----\\n……\\n-----END PUBLIC KEY-----` format. |
| use_jwks | boolean | False | false | | If true and if `public_key` is not set, use the JWKS to verify JWT signature and skip token introspection in client credentials flow. The JWKS endpoint is parsed from the discovery document. |
| use_pkce | boolean | False | false | | If true, use the Proof Key for Code Exchange (PKCE) for Authorization Code Flow as defined in [RFC 7636](https://datatracker.ietf.org/doc/html/rfc7636). |
| token_signing_alg_values_expected | string | False | | | Algorithm used for signing JWT, such as `RS256`. |
| set_access_token_header | boolean | False | true | | If true, set the access token in a request header. By default, the `X-Access-Token` header is used. |
| access_token_in_authorization_header | boolean | False | false | | If true and if `set_access_token_header` is also true, set the access token in the `Authorization` header. |
| set_id_token_header | boolean | False | true | | If true and if the ID token is available, set the value in the `X-ID-Token` request header. |
| set_userinfo_header | boolean | False | true | | If true and if user info data is available, set the value in the `X-Userinfo` request header. |
| set_refresh_token_header | boolean | False | false | | If true and if the refresh token is available, set the value in the `X-Refresh-Token` request header. |
| session | object | False | | | Session configuration used when `bearer_only` is `false` and the Plugin uses Authorization Code flow. |
| session.secret | string | True | | 16 or more characters | Key used for session encryption and HMAC operation when `bearer_only` is `false`. It is automatically generated and saved to etcd if not configured. When using APISIX in the standalone mode where etcd is no longer the configuration center, the `secret` should be configured. |
| session.cookie | object | False | | | Cookie configurations. |
| session.cookie.lifetime | integer | False | 3600 | | Cookie lifetime in seconds. |
| session_contents | object | False | | | Session content configurations. If unconfigured, all data will be stored in the session. |
| session_contents.access_token | boolean | False | | | If true, store the access token in the session. |
| session_contents.id_token | boolean | False | | | If true, store the ID token in the session. |
| session_contents.enc_id_token | boolean | False | | | If true, store the encrypted ID token in the session. |
| session_contents.user | boolean | False | | | If true, store the user info in the session. |
| unauth_action | string | False | auth | ["auth","deny","pass"] | Action for unauthenticated requests. When set to `auth`, redirect to the authentication endpoint of the OpenID provider. When set to `pass`, allow the request without authentication. When set to `deny`, return 401 unauthenticated responses rather than start the authorization code grant flow. |
| proxy_opts | object | False | | | Configurations for the proxy server that the OpenID provider is behind. |
| proxy_opts.http_proxy | string | False | | | Proxy server address for HTTP requests, such as `http://<proxy_host>:<proxy_port>`. |
| proxy_opts.https_proxy | string | False | | | Proxy server address for HTTPS requests, such as `http://<proxy_host>:<proxy_port>`. |
| proxy_opts.http_proxy_authorization | string | False | | Basic [base64 username:password] | Default `Proxy-Authorization` header value to be used with `http_proxy`. Can be overridden with custom `Proxy-Authorization` request header. |
| proxy_opts.https_proxy_authorization | string | False | | Basic [base64 username:password] | Default `Proxy-Authorization` header value to be used with `https_proxy`. Cannot be overridden with custom `Proxy-Authorization` request header since with HTTPS, the authorization is completed when connecting. |
| proxy_opts.no_proxy | string | False | | | Comma separated list of hosts that should not be proxied. |
| authorization_params | object | False | | | Additional parameters to send in the request to the authorization endpoint. |
| client_rsa_private_key | string | False | | | Client RSA private key used to sign JWT for authentication to the OP. Required when `token_endpoint_auth_method` is `private_key_jwt`. |
| client_rsa_private_key_id | string | False | | | Client RSA private key ID used to compute a signed JWT. Optional when `token_endpoint_auth_method` is `private_key_jwt`. |
| client_jwt_assertion_expires_in | integer | False | 60 | | Life duration of the signed JWT for authentication to the OP, in seconds. Used when `token_endpoint_auth_method` is `private_key_jwt` or `client_secret_jwt`. |
| renew_access_token_on_expiry | boolean | False | true | | If true, attempt to silently renew the access token when it expires or if a refresh token is available. If the token fails to renew, redirect user for re-authentication. |
| access_token_expires_in | integer | False | | | Lifetime of the access token in seconds if no `expires_in` attribute is present in the token endpoint response. |
| refresh_session_interval | integer | False | | | Time interval to refresh user ID token without requiring re-authentication. When not set, it will not check the expiration time of the session issued to the client by the gateway. If set to 900, it means refreshing the user's id_token (or session in the browser) after 900 seconds without requiring re-authentication. |
| iat_slack | integer | False | 120 | | Tolerance of clock skew in seconds with the `iat` claim in an ID token. |
| accept_none_alg | boolean | False | false | | Set to true if the OpenID provider does not sign its ID token, such as when the signature algorithm is set to `none`. |
| accept_unsupported_alg | boolean | False | true | | If true, ignore ID token signature to accept unsupported signature algorithm. |
| access_token_expires_leeway | integer | False | 0 | | Expiration leeway in seconds for access token renewal. When set to a value greater than 0, token renewal will take place the set amount of time before token expiration. This avoids errors in case the access token just expires when arriving to the resource server. |
| force_reauthorize | boolean | False | false | | If true, execute the authorization flow even when a token has been cached. |
| use_nonce | boolean | False | false | | If true, enable nonce parameter in authorization request. |
| revoke_tokens_on_logout | boolean | False | false | | If true, notify the authorization server a previously obtained refresh or access token is no longer needed at the revocation endpoint. |
| jwk_expires_in | integer | False | 86400 | | Expiration time for JWK cache in seconds. |
| jwt_verification_cache_ignore | boolean | False | false | | If true, force re-verification for a bearer token and ignore any existing cached verification results. |
| cache_segment | string | False | | | Optional name of a cache segment, used to separate and differentiate caches used by token introspection or JWT verification. |
| introspection_interval | integer | False | 0 | | TTL of the cached and introspected access token in seconds. The default value is 0, which means this option is not used and the Plugin defaults to use the TTL passed by expiry claim defined in `introspection_expiry_claim`. If `introspection_interval` is larger than 0 and less than the TTL passed by expiry claim defined in `introspection_expiry_claim`, use `introspection_interval`. |
| introspection_expiry_claim | string | False | exp | | Name of the expiry claim, which controls the TTL of the cached and introspected access token. |
| introspection_addon_headers | array[string] | False | | | Used to append additional header values to the introspection HTTP request. If a specified header does not exist in the origin request, its value will not be appended. |
| claim_validator.issuer.valid_issuers | array[string] | False | | | Whitelist of vetted issuers of the JWT. When not passed by the user, the issuer returned by the discovery endpoint will be used. In case both are missing, the issuer will not be validated. |
NOTE: `encrypt_fields = {"client_secret"}` is also defined in the schema, which means that the field will be stored encrypted in etcd. See [encrypted storage fields](../plugin-develop.md#encrypted-storage-fields).
## Examples
The examples below demonstrate how you can configure the `openid-connect` Plugin for different scenarios.
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
### Authorization Code Flow
The authorization code flow is defined in [RFC 6749, Section 4.1](https://datatracker.ietf.org/doc/html/rfc6749#section-4.1). It involves exchanging a temporary authorization code for an access token, and is typically used by confidential and public clients.
The following diagram illustrates the interaction between different entities when you implement the authorization code flow:
![Authorization code flow diagram](https://static.api7.ai/uploads/2023/11/27/Ga2402sb_oidc-code-auth-flow-revised.png)
When an incoming request does not contain an access token in its header or in an appropriate session cookie, the Plugin acts as a relying party and redirects to the authorization server to continue the authorization code flow.
After successful authentication, the Plugin keeps the token in the session cookie, and subsequent requests will use the token stored in the cookie.
See [Implement Authorization Code Grant](../tutorials/keycloak-oidc.md#implement-authorization-code-grant) for an example to use the `openid-connect` Plugin to integrate with Keycloak using the authorization code flow.
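For orientation before diving into the tutorial, the sketch below shows a minimal Route configured for the authorization code flow. The client ID, client secret, discovery URL, and upstream are placeholder values for a hypothetical identity provider, not settings from this document; `redirect_uri` is a sub-path of the Route `uri`, as described in the attributes table.

```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
  -H "X-API-KEY: ${admin_key}" \
  -d '{
    "id": "oidc-auth-code-route",
    "uri": "/api/v1/*",
    "plugins": {
      "openid-connect": {
        "client_id": "apisix-client",
        "client_secret": "placeholder-client-secret",
        "discovery": "https://idp.example.com/realms/demo/.well-known/openid-configuration",
        "redirect_uri": "/api/v1/redirect",
        "scope": "openid profile"
      }
    },
    "upstream": {
      "type": "roundrobin",
      "nodes": {
        "httpbin.org:80": 1
      }
    }
  }'
```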
### Proof Key for Code Exchange (PKCE)
The Proof Key for Code Exchange (PKCE) is defined in [RFC 7636](https://datatracker.ietf.org/doc/html/rfc7636). PKCE enhances the authorization code flow by adding a code challenge and verifier to prevent authorization code interception attacks.
The following diagram illustrates the interaction between different entities when you implement the authorization code flow with PKCE:
![Authorization code flow with PKCE diagram](https://static.api7.ai/uploads/2024/11/04/aJ2ZVuTC_auth-code-with-pkce.png)
See [Implement Authorization Code Grant](../tutorials/keycloak-oidc.md#implement-authorization-code-grant) for an example to use the `openid-connect` Plugin to integrate with Keycloak using the authorization code flow with PKCE.
### Client Credential Flow
The client credential flow is defined in [RFC 6749, Section 4.4](https://datatracker.ietf.org/doc/html/rfc6749#section-4.4). It involves a client requesting an access token with its own credentials to access protected resources. It is typically used in machine-to-machine authentication and is not performed on behalf of a specific user.
The following diagram illustrates the interaction between different entities when you implement the client credential flow:
<div style={{textAlign: 'center'}}>
<img src="https://static.api7.ai/uploads/2024/10/28/sbHxqnOz_client-credential-no-introspect.png" alt="Client credential flow diagram" style={{width: '70%'}} />
</div>
<br />
See [Implement Client Credentials Grant](../tutorials/keycloak-oidc.md#implement-client-credentials-grant) for an example to use the `openid-connect` Plugin to integrate with Keycloak using the client credentials flow.
### Introspection Flow
The introspection flow is defined in [RFC 7662](https://datatracker.ietf.org/doc/html/rfc7662). It involves verifying the validity and details of an access token by querying an authorization server's introspection endpoint.
In this flow, when a client presents an access token to the resource server, the resource server sends a request to the authorization server's introspection endpoint, which responds with token details if the token is active, including information like token expiration, associated scopes, and the user or client it belongs to.
The following diagram illustrates the interaction between different entities when you implement the authorization code flow with token introspection:
<br />
<div style={{textAlign: 'center'}}>
<img src="https://static.api7.ai/uploads/2024/10/29/Y2RWIUV9_client-cred-flow-introspection.png" alt="Client credential with introspection diagram" style={{width: '55%'}} />
</div>
<br />
See [Implement Client Credentials Grant](../tutorials/keycloak-oidc.md#implement-client-credentials-grant) for an example to use the `openid-connect` Plugin to integrate with Keycloak using the client credentials flow with token introspection.
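As a rough sketch, a Route set up for this flow might look like the following, with `bearer_only` enabled so that requests must carry a bearer token, and the introspection endpoint resolved from the discovery document as described in the attributes table. The client credentials and discovery URL are placeholders, not values from this document.

```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
  -H "X-API-KEY: ${admin_key}" \
  -d '{
    "id": "oidc-introspection-route",
    "uri": "/protected",
    "plugins": {
      "openid-connect": {
        "client_id": "apisix-client",
        "client_secret": "placeholder-client-secret",
        "discovery": "https://idp.example.com/realms/demo/.well-known/openid-configuration",
        "bearer_only": true,
        "introspection_endpoint_auth_method": "client_secret_basic"
      }
    },
    "upstream": {
      "type": "roundrobin",
      "nodes": {
        "httpbin.org:80": 1
      }
    }
  }'
```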
### Password Flow
The password flow is defined in [RFC 6749, Section 4.3](https://datatracker.ietf.org/doc/html/rfc6749#section-4.3). It is designed for trusted applications, allowing them to obtain an access token directly using a user's username and password. In this grant type, the client app sends the user's credentials along with its own client ID and secret to the authorization server, which then authenticates the user and, if valid, issues an access token.
Though efficient, this flow is intended for highly trusted, first-party applications only, as it requires the app to handle sensitive user credentials directly, posing significant security risks if used in third-party contexts.
The following diagram illustrates the interaction between different entities when you implement the password flow:
<div style={{textAlign: 'center'}}>
<img src="https://static.api7.ai/uploads/2024/10/30/njkWZVgX_pass-grant.png" alt="Password flow diagram" style={{width: '70%'}} />
</div>
<br />
See [Implement Password Grant](../tutorials/keycloak-oidc.md#implement-password-grant) for an example to use the `openid-connect` Plugin to integrate with Keycloak using the password flow.
### Refresh Token Grant
The refresh token grant is defined in [RFC 6749, Section 6](https://datatracker.ietf.org/doc/html/rfc6749#section-6). It enables clients to request a new access token without requiring the user to re-authenticate, using a previously issued refresh token. This flow is typically used when an access token expires, allowing the client to maintain continuous access to resources without user intervention. Refresh tokens are issued along with access tokens in certain OAuth flows, and their lifespan and security requirements depend on the authorization server's configuration.
The following diagram illustrates the interaction between different entities when implementing password flow with refresh token flow:
<div style={{textAlign: 'center'}}>
<img src="https://static.api7.ai/uploads/2024/10/30/YBF7rI6M_password-with-refresh-token.png" alt="Password grant with refresh token flow diagram" style={{width: '100%'}} />
</div>
<br />
See [Refresh Token](../tutorials/keycloak-oidc.md#refresh-token) for an example to use the `openid-connect` Plugin to integrate with Keycloak using the password flow with token refreshes.
## Troubleshooting
This section covers a few commonly seen issues when working with this Plugin to help you troubleshoot.
### APISIX Cannot Connect to OpenID provider
If APISIX fails to resolve or cannot connect to the OpenID provider, double check the DNS settings in your configuration file `config.yaml` and modify as needed.
### No Session State Found
If you encounter a `500 internal server error` with the following message in the log when working with [authorization code flow](#authorization-code-flow), there could be a number of reasons.
```text
the error request to the redirect_uri path, but there's no session state found
```
#### 1. Misconfigured Redirection URI
A common misconfiguration is to set the `redirect_uri` to the same value as the URI of the Route. When a user initiates a request to visit the protected resource, the request directly hits the redirection URI with no session cookie in the request, which leads to the no session state found error.
To properly configure the redirection URI, make sure that the `redirect_uri` falls under the Route where the Plugin is configured, without being identical to it. For instance, a correct configuration would be to set the `uri` of the Route to `/api/v1/*` and the path portion of the `redirect_uri` to `/api/v1/redirect`.
You should also ensure that the `redirect_uri` includes the scheme, such as `http` or `https`.
#### 2. Missing Session Secret
If you deploy APISIX in the [standalone mode](/apisix/production/deployment-modes#standalone-mode), make sure that `session.secret` is configured.
User sessions are stored in browser as cookies and encrypted with session secret. The secret is automatically generated and saved to etcd if no secret is configured through the `session.secret` attribute. However, in standalone mode, etcd is no longer the configuration center. Therefore, you should explicitly configure `session.secret` for this Plugin in the YAML configuration center `apisix.yaml`.
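For illustration, a minimal standalone Route entry with an explicit session secret could look like the sketch below. The Route, client credentials, and secret are placeholders, and the secret must be at least 16 characters long.

```yaml title="apisix.yaml"
routes:
  - id: 1
    uri: /api/v1/*
    plugins:
      openid-connect:
        client_id: apisix-client
        client_secret: placeholder-client-secret
        discovery: https://idp.example.com/realms/demo/.well-known/openid-configuration
        session:
          secret: a-session-secret-longer-than-16-chars
    upstream:
      type: roundrobin
      nodes:
        "httpbin.org:80": 1
#END
```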
#### 3. Cookie Not Sent or Absent
Check if the [`SameSite`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie#samesitesamesite-value) cookie attribute is properly set (i.e. if your application needs to send the cookie cross sites) to see if this could be a factor that prevents the cookie being saved to the browser's cookie jar or being sent from the browser.
#### 4. Upstream Sent Too Big Header
If you have NGINX sitting in front of APISIX to proxy client traffic, see if you observe the following error in NGINX's `error.log`:
```text
upstream sent too big header while reading response header from upstream
```
If so, try adjusting `proxy_buffers`, `proxy_buffer_size`, and `proxy_busy_buffers_size` to larger values.
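As a rough illustration, the relevant directives in that front NGINX configuration might be raised along the following lines. The sizes shown are arbitrary starting points rather than recommendations, and the proxy target is assumed to be APISIX listening on port 9080.

```nginx
# Fragment of the NGINX instance proxying traffic to APISIX; tune sizes to your header volume.
location / {
    proxy_pass               http://127.0.0.1:9080;  # assumed APISIX address
    proxy_buffer_size        16k;                    # buffer for the response header
    proxy_buffers            8 16k;                  # buffers for the response body
    proxy_busy_buffers_size  32k;                    # limit on buffers busy sending to the client
}
```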
Another option is to configure the `session_contents` attribute to adjust which data to store in the session. For instance, you can set `session_contents.access_token` to `true` to store only the access token.
#### 5. Invalid Client Secret
Verify if `client_secret` is valid and correct. An invalid `client_secret` would lead to an authentication failure and no token shall be returned and stored in session.
#### 6. PKCE IdP Configuration
If you are enabling PKCE with the authorization code flow, make sure you have configured the IdP client to use PKCE. For example, in Keycloak, you should configure the PKCE challenge method in the client's advanced settings:
<div style={{textAlign: 'center'}}>
<img src="https://static.api7.ai/uploads/2024/11/04/xvnCNb20_pkce-keycloak-revised.jpeg" alt="PKCE keycloak configuration" style={{width: '70%'}} />
</div>
@@ -0,0 +1,220 @@
---
title: opentelemetry
keywords:
- Apache APISIX
- API Gateway
- Plugin
- OpenTelemetry
description: The opentelemetry Plugin instruments APISIX and sends traces to an OpenTelemetry collector based on the OpenTelemetry specification, in binary-encoded OTLP over HTTP.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
<head>
<link rel="canonical" href="https://docs.api7.ai/hub/opentelemetry" />
</head>
## Description
The `opentelemetry` Plugin can be used to report tracing data according to the [OpenTelemetry Specification](https://opentelemetry.io/docs/reference/specification/).
The Plugin only supports binary-encoded [OTLP over HTTP](https://opentelemetry.io/docs/reference/specification/protocol/otlp/#otlphttp).
## Configurations
By default, configurations of the Service name, tenant ID, collector, and batch span processor are pre-configured in [default configuration](https://github.com/apache/apisix/blob/master/apisix/cli/config.lua).
You can change the configuration of the Plugin through the endpoint `apisix/admin/plugin_metadata/opentelemetry`. For example:
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl http://127.0.0.1:9180/apisix/admin/plugin_metadata/opentelemetry -H "X-API-KEY: $admin_key" -X PUT -d '
{
"trace_id_source": "x-request-id",
"resource": {
"service.name": "APISIX"
},
"collector": {
"address": "127.0.0.1:4318",
"request_timeout": 3,
"request_headers": {
"Authorization": "token"
}
},
"batch_span_processor": {
"drop_on_queue_full": false,
"max_queue_size": 1024,
"batch_timeout": 2,
"inactive_timeout": 1,
"max_export_batch_size": 16
},
"set_ngx_var": false
}'
```
## Attributes
| Name | Type | Required | Default | Valid Values | Description |
|---------------------------------------|---------------|----------|--------------|--------------|-------------|
| sampler | object | False | - | - | Sampling configuration. |
| sampler.name | string | False | `always_off` | ["always_on", "always_off", "trace_id_ratio", "parent_base"] | Sampling strategy.<br />To always sample, use `always_on`.<br />To never sample, use `always_off`.<br />To randomly sample based on a given ratio, use `trace_id_ratio`.<br />To use the sampling decision of the span's parent, use `parent_base`. If there is no parent, use the root sampler. |
| sampler.options | object | False | - | - | Parameters for sampling strategy. |
| sampler.options.fraction | number | False | 0 | [0, 1] | Sampling ratio when the sampling strategy is `trace_id_ratio`. |
| sampler.options.root | object | False | - | - | Root sampler when the sampling strategy is `parent_base` strategy. |
| sampler.options.root.name | string | False | - | ["always_on", "always_off", "trace_id_ratio"] | Root sampling strategy. |
| sampler.options.root.options | object | False | - | - | Root sampling strategy parameters. |
| sampler.options.root.options.fraction | number | False | 0 | [0, 1] | Root sampling ratio when the sampling strategy is `trace_id_ratio`. |
| additional_attributes | array[string] | False | - | - | Additional attributes appended to the trace span. Support [built-in variables](https://apisix.apache.org/docs/apisix/apisix-variable/) in values. |
| additional_header_prefix_attributes | array[string] | False | - | - | Headers or header prefixes appended to the trace span's attributes. For example, use `x-my-header` or `x-my-headers-*` to include all headers with the prefix `x-my-headers-`. |
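For example, to follow the sampling decision of the parent span and otherwise sample a fraction of root traces, the `sampler` portion of a plugin configuration could be structured as in the sketch below. The `0.1` fraction is an arbitrary example value.

```json
{
  "opentelemetry": {
    "sampler": {
      "name": "parent_base",
      "options": {
        "root": {
          "name": "trace_id_ratio",
          "options": {
            "fraction": 0.1
          }
        }
      }
    }
  }
}
```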
## Examples
The examples below demonstrate how you can work with the `opentelemetry` Plugin for different scenarios.
### Enable `opentelemetry` Plugin
By default, the `opentelemetry` Plugin is disabled in APISIX. To enable, add the Plugin to your configuration file as such:
```yaml title="config.yaml"
plugins:
- ...
- opentelemetry
```
Reload APISIX for changes to take effect.
See [static configurations](#static-configurations) for other available options you can configure in `config.yaml`.
### Send Traces to OpenTelemetry
The following example demonstrates how to trace requests to a Route and send traces to OpenTelemetry.
Start an OpenTelemetry collector instance in Docker:
```shell
docker run -d --name otel-collector -p 4318:4318 otel/opentelemetry-collector-contrib
```
Create a Route with `opentelemetry` Plugin:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "otel-tracing-route",
"uri": "/anything",
"plugins": {
"opentelemetry": {
"sampler": {
"name": "always_on"
}
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org": 1
}
}
}'
```
Send a request to the Route:
```shell
curl "http://127.0.0.1:9080/anything"
```
You should receive an `HTTP/1.1 200 OK` response.
In OpenTelemetry collector's log, you should see information similar to the following:
```text
2024-02-18T17:14:03.825Z info ResourceSpans #0
Resource SchemaURL:
Resource attributes:
-> telemetry.sdk.language: Str(lua)
-> telemetry.sdk.name: Str(opentelemetry-lua)
-> telemetry.sdk.version: Str(0.1.1)
-> hostname: Str(e34673e24631)
-> service.name: Str(APISIX)
ScopeSpans #0
ScopeSpans SchemaURL:
InstrumentationScope opentelemetry-lua
Span #0
Trace ID : fbd0a38d4ea4a128ff1a688197bc58b0
Parent ID :
ID : af3dc7642104748a
Name : GET /anything
Kind : Server
Start time : 2024-02-18 17:14:03.763244032 +0000 UTC
End time : 2024-02-18 17:14:03.920229888 +0000 UTC
Status code : Unset
Status message :
Attributes:
-> net.host.name: Str(127.0.0.1)
-> http.method: Str(GET)
-> http.scheme: Str(http)
-> http.target: Str(/anything)
-> http.user_agent: Str(curl/7.64.1)
-> apisix.route_id: Str(otel-tracing-route)
-> apisix.route_name: Empty()
-> http.route: Str(/anything)
-> http.status_code: Int(200)
{"kind": "exporter", "data_type": "traces", "name": "debug"}
```
To visualize these traces, you can export your telemetry to backend Services, such as Zipkin and Prometheus. See [exporters](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter) for more details.
### Using Trace Variables in Logging
The following example demonstrates how to configure the `opentelemetry` Plugin to set the following built-in variables, which can be used in logger Plugins or access logs:
- `opentelemetry_context_traceparent`: [trace parent](https://www.w3.org/TR/trace-context/#trace-context-http-headers-format) ID
- `opentelemetry_trace_id`: trace ID of the current span
- `opentelemetry_span_id`: span ID of the current span
Update the configuration file as below. You should customize the access log format to use the `opentelemetry` Plugin variables, and set `opentelemetry` variables in the `set_ngx_var` field.
```yaml title="conf/config.yaml"
nginx_config:
http:
enable_access_log: true
access_log_format: '{"time": "$time_iso8601","opentelemetry_context_traceparent": "$opentelemetry_context_traceparent","opentelemetry_trace_id": "$opentelemetry_trace_id","opentelemetry_span_id": "$opentelemetry_span_id","remote_addr": "$remote_addr"}'
access_log_format_escape: json
plugin_attr:
opentelemetry:
set_ngx_var: true
```
Reload APISIX for configuration changes to take effect.
You should see access log entries similar to the following when you generate requests:
```text
{"time": "18/Feb/2024:15:09:00 +0000","opentelemetry_context_traceparent": "00-fbd0a38d4ea4a128ff1a688197bc58b0-8f4b9d9970a02629-01","opentelemetry_trace_id": "fbd0a38d4ea4a128ff1a688197bc58b0","opentelemetry_span_id": "af3dc7642104748a","remote_addr": "172.10.0.1"}
```
@@ -0,0 +1,139 @@
---
title: openwhisk
keywords:
- Apache APISIX
- API Gateway
- Plugin
- OpenWhisk
description: This document contains information about the Apache openwhisk Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `openwhisk` Plugin is used to integrate APISIX with [Apache OpenWhisk](https://openwhisk.apache.org) serverless platform.
This Plugin can be configured on a Route and requests will be sent to the configured OpenWhisk API endpoint as the upstream.
## Attributes
| Name | Type | Required | Default | Valid values | Description |
| ----------------- | ------- | -------- | ------- | ------------ | ---------------------------------------------------------------------------------------------------------- |
| api_host | string | True | | | OpenWhisk API host address. For example, `https://localhost:3233`. |
| ssl_verify | boolean | False | true | | When set to `true` verifies the SSL certificate. |
| service_token | string | True | | | OpenWhisk service token. The format is `xxx:xxx` and it is passed through basic auth when calling the API. |
| namespace | string | True | | | OpenWhisk namespace. For example `guest`. |
| action | string | True | | | OpenWhisk action. For example `hello`. |
| result | boolean | False | true | | When set to `true` gets the action metadata (executes the function and gets response). |
| timeout | integer | False | 60000ms | [1, 60000]ms | OpenWhisk action and HTTP call timeout in ms. |
| keepalive | boolean | False | true | | When set to `true` keeps the connection alive for reuse. |
| keepalive_timeout | integer | False | 60000ms | [1000,...]ms | Time in ms for the connection to remain idle without closing. |
| keepalive_pool | integer | False | 5 | [1,...] | Maximum number of requests that can be sent on this connection before closing it. |
:::note
The `timeout` attribute sets the time taken by the OpenWhisk action to execute, and the timeout for the HTTP client in APISIX. OpenWhisk action calls may take time to pull the runtime image and start the container. So, if the value is set too small, it may cause a large number of requests to fail.
OpenWhisk supports timeouts in the range 1ms to 60000ms and it is recommended to set it to at least 1000ms.
:::
## Enable Plugin
Before configuring the Plugin, you need to have OpenWhisk running. The example below shows OpenWhisk in standalone mode:
```shell
docker run --rm -d \
-h openwhisk --name openwhisk \
-p 3233:3233 -p 3232:3232 \
-v /var/run/docker.sock:/var/run/docker.sock \
openwhisk/standalone:nightly
docker exec openwhisk waitready
```
Install the [openwhisk-cli](https://github.com/apache/openwhisk-cli) utility.
You can download the released `wsk` executable binaries for Linux systems from the [openwhisk-cli](https://github.com/apache/openwhisk-cli) repository.
You can then create an action to test:
```shell
wsk property set --apihost "http://localhost:3233" --auth "23bc46b1-71f6-4ed5-8c54-816aa4f8c502:123zO3xZCLrMN6v2BKK1dXYFpXlPkccOFqm12CdAsMgRU4VrNZ9lyGVCGuMDGIwP"
wsk action update test <(echo 'function main(){return {"ready":true}}') --kind nodejs:14
```
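Optionally, you can invoke the action directly with the CLI to confirm it returns the expected payload before wiring it into APISIX:

```shell
wsk action invoke test --result
```

This should print the `{"ready": true}` payload defined above.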
You can now configure the Plugin on a specific Route and point to this running OpenWhisk service:
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/hello",
"plugins": {
"openwhisk": {
"api_host": "http://localhost:3233",
"service_token": "23bc46b1-71f6-4ed5-8c54-816aa4f8c502:123zO3xZCLrMN6v2BKK1dXYFpXlPkccOFqm12CdAsMgRU4VrNZ9lyGVCGuMDGIwP",
"namespace": "guest",
"action": "test"
}
}
}'
```
## Example usage
Once you have configured the Plugin, you can send a request to the Route and it will invoke the configured action:
```shell
curl -i http://127.0.0.1:9080/hello
```
This will give back the response from the action:
```json
{ "ready": true }
```
## Delete Plugin
To remove the `openwhisk` Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"methods": ["GET"],
"uri": "/index.html",
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
}
}'
```
@@ -0,0 +1,476 @@
---
title: prometheus
keywords:
- Apache APISIX
- API Gateway
- Plugin
- Prometheus
description: The prometheus Plugin provides the capability to integrate APISIX with Prometheus for metric collection and continuous monitoring.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
<head>
<link rel="canonical" href="https://docs.api7.ai/hub/prometheus" />
</head>
## Description
The `prometheus` Plugin provides the capability to integrate APISIX with [Prometheus](https://prometheus.io).
After enabling the Plugin, APISIX will start collecting relevant metrics, such as API requests and latencies, and exporting them in a [text-based exposition format](https://prometheus.io/docs/instrumenting/exposition_formats/#exposition-formats) to Prometheus. You can then create event monitoring and alerting in Prometheus to monitor the health of your API gateway and APIs.
## Static Configurations
By default, `prometheus` configurations are pre-configured in the [default configuration](https://github.com/apache/apisix/blob/master/apisix/cli/config.lua).
To customize these values, add the corresponding configurations to `config.yaml`. For example:
```yaml
plugin_attr:
prometheus: # Plugin: prometheus attributes
export_uri: /apisix/prometheus/metrics # Set the URI for the Prometheus metrics endpoint.
metric_prefix: apisix_ # Set the prefix for Prometheus metrics generated by APISIX.
enable_export_server: true # Enable the Prometheus export server.
export_addr: # Set the address for the Prometheus export server.
ip: 127.0.0.1 # Set the IP.
port: 9091 # Set the port.
# metrics: # Create extra labels for metrics.
# http_status: # These metrics will be prefixed with `apisix_`.
# extra_labels: # Set the extra labels for http_status metrics.
# - upstream_addr: $upstream_addr
# - status: $upstream_status
# expire: 0 # The expiration time of metrics in seconds.
# 0 means the metrics will not expire.
# http_latency:
# extra_labels: # Set the extra labels for http_latency metrics.
# - upstream_addr: $upstream_addr
# expire: 0 # The expiration time of metrics in seconds.
# 0 means the metrics will not expire.
# bandwidth:
# extra_labels: # Set the extra labels for bandwidth metrics.
# - upstream_addr: $upstream_addr
# expire: 0 # The expiration time of metrics in seconds.
# 0 means the metrics will not expire.
# default_buckets: # Set the default buckets for the `http_latency` metrics histogram.
# - 10
# - 50
# - 100
# - 200
# - 500
# - 1000
# - 2000
# - 5000
# - 10000
# - 30000
# - 60000
```
You can use the [Nginx variable](https://nginx.org/en/docs/http/ngx_http_core_module.html) to create `extra_labels`. See [add extra labels](#add-extra-labels-for-metrics).
Reload APISIX for changes to take effect.
## Attribute
| Name | Type | Required | Default | Valid values | Description |
| ------------- | ------- | -------- | ------- | ------------ | ------------------------------------------ |
| prefer_name | boolean | False | false | | If true, export the Route/Service name instead of their ID in Prometheus metrics. |
## Metrics
There are different types of metrics in Prometheus. To understand their differences, see [metrics types](https://prometheus.io/docs/concepts/metric_types/).
The following metrics are exported by the `prometheus` Plugin by default. See [get APISIX metrics](#get-apisix-metrics) for an example. Note that some metrics, such as `apisix_batch_process_entries`, are not readily visible if there are no data.
| Name | Type | Description |
| ------------------------------ | --------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| apisix_bandwidth | counter | Total amount of traffic flowing through APISIX in bytes. |
| apisix_etcd_modify_indexes | gauge | Number of changes to etcd by APISIX keys. |
| apisix_batch_process_entries | gauge | Number of remaining entries in a batch when sending data in batches, such as with `http logger`, and other logging Plugins. |
| apisix_etcd_reachable | gauge | Whether APISIX can reach etcd. A value of `1` represents reachable and `0` represents unreachable. |
| apisix_http_status | counter | HTTP status codes returned from upstream Services. |
| apisix_http_requests_total | gauge | Number of HTTP requests from clients. |
| apisix_nginx_http_current_connections | gauge | Number of current connections with clients. |
| apisix_nginx_metric_errors_total | counter | Total number of `nginx-lua-prometheus` errors. |
| apisix_http_latency | histogram | HTTP request latency in milliseconds. |
| apisix_node_info | gauge | Information of the APISIX node, such as host name and the current APISIX version. |
| apisix_shared_dict_capacity_bytes | gauge | The total capacity of an [NGINX shared dictionary](https://github.com/openresty/lua-nginx-module#ngxshareddict). |
| apisix_shared_dict_free_space_bytes | gauge | The remaining space in an [NGINX shared dictionary](https://github.com/openresty/lua-nginx-module#ngxshareddict). |
| apisix_upstream_status | gauge | Health check status of upstream nodes, available if health checks are configured on the upstream. A value of `1` represents healthy and `0` represents unhealthy. |
| apisix_stream_connection_total | counter | Total number of connections handled per Stream Route. |
## Labels
[Labels](https://prometheus.io/docs/practices/naming/#labels) are attributes of metrics that are used to differentiate metrics.
For example, the `apisix_http_status` metric can be labeled with `route` information to identify which Route the HTTP status originates from.
The following are labels for a non-exhaustive list of APISIX metrics and their descriptions.
### Labels for `apisix_http_status`
The following labels are used to differentiate `apisix_http_status` metrics.
| Name | Description |
| ------------ | ----------------------------------------------------------------------------------------------------------------------------- |
| code | HTTP response code returned by the upstream node. |
| route | ID of the Route that the HTTP status originates from when `prefer_name` is `false` (default), and name of the Route when `prefer_name` is `true`. Default to an empty string if a request does not match any Route. |
| matched_uri | URI of the Route that matches the request. Default to an empty string if a request does not match any Route. |
| matched_host | Host of the Route that matches the request. Default to an empty string if a request does not match any Route, or host is not configured on the Route. |
| service | ID of the Service that the HTTP status originates from when `prefer_name` is `false` (default), and name of the Service when `prefer_name` is `true`. Default to the configured value of host on the Route if the matched Route does not belong to any Service. |
| consumer | Name of the Consumer associated with a request. Default to an empty string if no Consumer is associated with the request. |
| node | IP address of the upstream node. |
### Labels for `apisix_bandwidth`
The following labels are used to differentiate `apisix_bandwidth` metrics.
| Name | Description |
| ---------- | ----------------------------------------------------------------------------------------------------------------------------- |
| type | Type of traffic, `egress` or `ingress`. |
| route | ID of the Route that bandwidth corresponds to when `prefer_name` is `false` (default), and name of the Route when `prefer_name` is `true`. Default to an empty string if a request does not match any Route. |
| service | ID of the Service that bandwidth corresponds to when `prefer_name` is `false` (default), and name of the Service when `prefer_name` is `true`. Default to the configured value of host on the Route if the matched Route does not belong to any Service. |
| consumer | Name of the Consumer associated with a request. Default to an empty string if no Consumer is associated with the request. |
| node | IP address of the upstream node. |
### Labels for `apisix_http_latency`
The following labels are used to differentiate `apisix_http_latency` metrics.
| Name | Description |
| ---------- | ----------------------------------------------------------------------------------------------------------------------------------- |
| type | Type of latencies. See [latency types](#latency-types) for details. |
| route | ID of the Route that latencies correspond to when `prefer_name` is `false` (default), and name of the Route when `prefer_name` is `true`. Default to an empty string if a request does not match any Route. |
| service | ID of the Service that latencies correspond to when `prefer_name` is `false` (default), and name of the Service when `prefer_name` is `true`. Default to the configured value of host on the Route if the matched Route does not belong to any Service. |
| consumer | Name of the Consumer associated with latencies. Default to an empty string if no Consumer is associated with the request. |
| node | IP address of the upstream node associated with latencies. |
#### Latency Types
`apisix_http_latency` can be labeled with one of the three types:
* `request` represents the time elapsed from when the first byte was read from the client to when the log was written after the last byte was sent to the client.
* `upstream` represents the time elapsed waiting on responses from the upstream Service.
* `apisix` represents the difference between the `request` latency and `upstream` latency.
In other words, the APISIX latency is not only attributed to the Lua processing. It should be understood as follows:
```text
APISIX latency
= downstream request time - upstream response time
= downstream traffic latency + NGINX latency
```
### Labels for `apisix_upstream_status`
The following labels are used to differentiate `apisix_upstream_status` metrics.
| Name | Description |
| ---------- | --------------------------------------------------------------------------------------------------- |
| name | Resource ID corresponding to the upstream configured with health checks, such as `/apisix/routes/1` and `/apisix/upstreams/1`. |
| ip | IP address of the upstream node. |
| port | Port number of the node. |
## Examples
The examples below demonstrate how you can work with the `prometheus` Plugin for different scenarios.
### Get APISIX Metrics
The following example demonstrates how you can get metrics from APISIX.
The default Prometheus metrics endpoint and other Prometheus-related configurations can be found in the [static configuration](#static-configurations). If you would like to customize these configurations, update `config.yaml` and reload APISIX.
If you deploy APISIX in a containerized environment and would like to access the Prometheus metrics endpoint externally, update the configuration file as follows and reload APISIX:
```yaml title="conf/config.yaml"
plugin_attr:
prometheus:
export_addr:
ip: 0.0.0.0
```
Send a request to the APISIX Prometheus metrics endpoint:
```shell
curl "http://127.0.0.1:9091/apisix/prometheus/metrics"
```
You should see an output similar to the following:
```text
# HELP apisix_bandwidth Total bandwidth in bytes consumed per Service in Apisix
# TYPE apisix_bandwidth counter
apisix_bandwidth{type="egress",route="",service="",consumer="",node=""} 8417
apisix_bandwidth{type="egress",route="1",service="",consumer="",node="127.0.0.1"} 1420
apisix_bandwidth{type="egress",route="2",service="",consumer="",node="127.0.0.1"} 1420
apisix_bandwidth{type="ingress",route="",service="",consumer="",node=""} 189
apisix_bandwidth{type="ingress",route="1",service="",consumer="",node="127.0.0.1"} 332
apisix_bandwidth{type="ingress",route="2",service="",consumer="",node="127.0.0.1"} 332
# HELP apisix_etcd_modify_indexes Etcd modify index for APISIX keys
# TYPE apisix_etcd_modify_indexes gauge
apisix_etcd_modify_indexes{key="consumers"} 0
apisix_etcd_modify_indexes{key="global_rules"} 0
...
```
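If you run your own Prometheus server, a minimal scrape job pointing at this endpoint could look like the sketch below. The job name, target address, and interval are placeholders to adapt to your deployment.

```yaml title="prometheus.yml"
scrape_configs:
  - job_name: "apisix"
    scrape_interval: 15s
    metrics_path: "/apisix/prometheus/metrics"
    static_configs:
      - targets: ["127.0.0.1:9091"]
```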
### Expose APISIX Metrics on Public API Endpoint
The following example demonstrates how you can disable the Prometheus export server that, by default, exposes an endpoint on port `9091`, and expose APISIX Prometheus metrics on a new public API endpoint on port `9080`, which APISIX uses to listen to other client requests.
Disable the Prometheus export server in the configuration file and reload APISIX for changes to take effect:
```yaml title="conf/config.yaml"
plugin_attr:
prometheus:
enable_export_server: false
```
Next, create a Route with [`public-api`](../../../en/latest/plugins/public-api.md) Plugin and expose a public API endpoint for APISIX metrics:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes/prometheus-metrics" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"uri": "/apisix/prometheus/metrics",
"plugins": {
"public-api": {}
}
}'
```
Send a request to the new metrics endpoint to verify:
```shell
curl "http://127.0.0.1:9080/apisix/prometheus/metrics"
```
You should see an output similar to the following:
```text
# HELP apisix_http_requests_total The total number of client requests since APISIX started
# TYPE apisix_http_requests_total gauge
apisix_http_requests_total 1
# HELP apisix_nginx_http_current_connections Number of HTTP connections
# TYPE apisix_nginx_http_current_connections gauge
apisix_nginx_http_current_connections{state="accepted"} 1
apisix_nginx_http_current_connections{state="active"} 1
apisix_nginx_http_current_connections{state="handled"} 1
apisix_nginx_http_current_connections{state="reading"} 0
apisix_nginx_http_current_connections{state="waiting"} 0
apisix_nginx_http_current_connections{state="writing"} 1
...
```
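Since the metrics are now served on the client-facing port, consider restricting access to them. One possible approach, shown as a sketch below and not a requirement of this Plugin, is to add an authentication Plugin such as `key-auth` to the same Route; the Consumer and key setup is omitted here.

```shell
curl "http://127.0.0.1:9180/apisix/admin/routes/prometheus-metrics" -X PATCH \
  -H "X-API-KEY: ${admin_key}" \
  -d '{
    "plugins": {
      "public-api": {},
      "key-auth": {}
    }
  }'
```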
### Monitor Upstream Health Statuses
The following example demonstrates how to monitor the health status of upstream nodes.
Create a Route with the `prometheus` Plugin and configure upstream active health checks:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "prometheus-route",
"uri": "/get",
"plugins": {
"prometheus": {}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1,
"127.0.0.1:20001": 1
},
"checks": {
"active": {
"timeout": 5,
"http_path": "/status",
"healthy": {
"interval": 2,
"successes": 1
},
"unhealthy": {
"interval": 1,
"http_failures": 2
}
},
"passive": {
"healthy": {
"http_statuses": [200, 201],
"successes": 3
},
"unhealthy": {
"http_statuses": [500],
"http_failures": 3,
"tcp_failures": 3
}
}
}
}
}'
```
Send a request to the APISIX Prometheus metrics endpoint:
```shell
curl "http://127.0.0.1:9091/apisix/prometheus/metrics"
```
You should see an output similar to the following:
```text
# HELP apisix_upstream_status upstream status from health check
# TYPE apisix_upstream_status gauge
apisix_upstream_status{name="/apisix/routes/1",ip="54.237.103.220",port="80"} 1
apisix_upstream_status{name="/apisix/routes/1",ip="127.0.0.1",port="20001"} 0
```
This shows that the upstream node `httpbin.org:80` is healthy and the upstream node `127.0.0.1:20001` is unhealthy.
### Add Extra Labels for Metrics
The following example demonstrates how to add additional labels to metrics and use the [Nginx variable](https://nginx.org/en/docs/http/ngx_http_core_module.html) in label values.
Currently, only the following metrics support extra labels:
* apisix_http_status
* apisix_http_latency
* apisix_bandwidth
Include the following configurations in the configuration file to add labels for metrics and reload APISIX for changes to take effect:
```yaml title="conf/config.yaml"
plugin_attr:
prometheus: # Plugin: prometheus
metrics: # Create extra labels from the NGINX variables.
http_status:
extra_labels: # Set the extra labels for http_status metrics.
- upstream_addr: $upstream_addr # Add an extra upstream_addr label with value being the NGINX variable $upstream_addr.
- route_name: $route_name # Add an extra route_name label with value being the APISIX variable $route_name.
```
Note that if you define a variable in the label value but it does not correspond to any existing [APISIX variable](https://apisix.apache.org/docs/apisix/apisix-variable/) or [NGINX variable](https://nginx.org/en/docs/http/ngx_http_core_module.html), the label value will default to an empty string.
Create a Route with the `prometheus` Plugin:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "prometheus-route",
    "uri": "/get",
"name": "extra-label",
"plugins": {
"prometheus": {}
},
"upstream": {
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Send a request to the Route to verify:
```shell
curl -i "http://127.0.0.1:9080/get"
```
You should see an `HTTP/1.1 200 OK` response.
Send a request to the APISIX Prometheus metrics endpoint:
```shell
curl "http://127.0.0.1:9091/apisix/prometheus/metrics"
```
You should see an output similar to the following:
```text
# HELP apisix_http_status HTTP status codes per Service in APISIX
# TYPE apisix_http_status counter
apisix_http_status{code="200",route="1",matched_uri="/get",matched_host="",service="",consumer="",node="54.237.103.220",upstream_addr="54.237.103.220:80",route_name="extra-label"} 1
```
### Monitor TCP/UDP Traffic with Prometheus
The following example demonstrates how to collect TCP/UDP traffic metrics in APISIX.
Include the following configurations in `config.yaml` to enable stream proxy and `prometheus` Plugin for stream proxy. Reload APISIX for changes to take effect:
```yaml title="conf/config.yaml"
apisix:
proxy_mode: http&stream # Enable both L4 & L7 proxies
stream_proxy: # Configure L4 proxy
tcp:
- 9100 # Set TCP proxy listening port
udp:
- 9200 # Set UDP proxy listening port
stream_plugins:
- prometheus # Enable prometheus for stream proxy
```
Create a Stream Route with the `prometheus` Plugin:
```shell
curl "http://127.0.0.1:9180/apisix/admin/stream_routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
    "id": "1",
"plugins": {
"prometheus":{}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Send a request to the Stream Route to verify:
```shell
curl -i "http://127.0.0.1:9100"
```
You should see an `HTTP/1.1 200 OK` response.
Send a request to the APISIX Prometheus metrics endpoint:
```shell
curl "http://127.0.0.1:9091/apisix/prometheus/metrics"
```
You should see an output similar to the following:
```text
# HELP apisix_stream_connection_total Total number of connections handled per Stream Route in APISIX
# TYPE apisix_stream_connection_total counter
apisix_stream_connection_total{route="1"} 1
```
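Similarly, you could optionally filter the output for the stream connection counter:

```shell
curl -s "http://127.0.0.1:9091/apisix/prometheus/metrics" | grep "apisix_stream_connection_total"
```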

View File

@@ -0,0 +1,379 @@
---
title: proxy-cache
keywords:
- Apache APISIX
- API Gateway
- Proxy Cache
description: The proxy-cache Plugin caches responses based on keys, supporting disk and memory caching for GET, POST, and HEAD requests, enhancing API performance.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
<head>
<link rel="canonical" href="https://docs.api7.ai/hub/proxy-cache" />
</head>
## Description
The `proxy-cache` Plugin provides the capability to cache responses based on a cache key. The Plugin supports both disk-based and memory-based caching options to cache for [GET](https://anything.org/learn/serving-over-http/#get-request), [POST](https://anything.org/learn/serving-over-http/#post-request), and [HEAD](https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/HEAD) requests.
Responses can be conditionally cached based on request HTTP methods, response status codes, request header values, and more.
## Attributes
| Name | Type | Required | Default | Valid values | Description |
|--------------------|----------------|----------|---------------------------|-------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| cache_strategy | string | False | disk | ["disk","memory"] | Caching strategy. Cache on disk or in memory. |
| cache_zone | string | False | disk_cache_one | | Cache zone used with the caching strategy. The value should match one of the cache zones defined in the [configuration files](#static-configurations) and should correspond to the caching strategy. For example, when using the in-memory caching strategy, you should use an in-memory cache zone. |
| cache_key | array[string] | False | ["$host", "$request_uri"] | | Key to use for caching. Support [NGINX variables](https://nginx.org/en/docs/varindex.html) and constant strings in values. Variables should be prefixed with a `$` sign. |
| cache_bypass | array[string] | False | | | One or more parameters to parse value from, such that if any of the values is not empty and is not equal to `0`, response will not be retrieved from cache. Support [NGINX variables](https://nginx.org/en/docs/varindex.html) and constant strings in values. Variables should be prefixed with a `$` sign. |
| cache_method       | array[string]  | False    | ["GET", "HEAD"]           | ["GET", "POST", "HEAD"] | Request methods for which the response should be cached. |
| cache_http_status  | array[integer] | False    | [200, 301, 404]           | [200, 599]              | Response HTTP status codes for which the response should be cached. |
| hide_cache_headers | boolean | False | false | | If true, hide `Expires` and `Cache-Control` response headers. |
| cache_control | boolean | False | false | | If true, comply with `Cache-Control` behavior in the HTTP specification. Only valid for in-memory strategy. |
| no_cache | array[string] | False | | | One or more parameters to parse value from, such that if any of the values is not empty and is not equal to `0`, response will not be cached. Support [NGINX variables](https://nginx.org/en/docs/varindex.html) and constant strings in values. Variables should be prefixed with a `$` sign. |
| cache_ttl | integer | False | 300 | >=1 | Cache time to live (TTL) in seconds when caching in memory. To adjust the TTL when caching on disk, update `cache_ttl` in the [configuration files](#static-configurations). The TTL value is evaluated in conjunction with the values in the response headers [`Cache-Control`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Cache-Control) and [`Expires`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Expires) received from the Upstream service. |
## Static Configurations
By default, values such as `cache_ttl` when caching on disk and cache `zones` are pre-configured in the [default configuration](https://github.com/apache/apisix/blob/master/apisix/cli/config.lua).
To customize these values, add the corresponding configurations to `config.yaml`. For example:
```yaml
apisix:
proxy_cache:
cache_ttl: 10s # default cache TTL used when caching on disk, only if none of the `Expires`
# and `Cache-Control` response headers is present, or if APISIX returns
# `502 Bad Gateway` or `504 Gateway Timeout` due to unavailable upstreams
zones:
- name: disk_cache_one
memory_size: 50m
disk_size: 1G
disk_path: /tmp/disk_cache_one
cache_levels: 1:2
# - name: disk_cache_two
# memory_size: 50m
# disk_size: 1G
# disk_path: "/tmp/disk_cache_two"
# cache_levels: "1:2"
- name: memory_cache
memory_size: 50m
```
Reload APISIX for changes to take effect.
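How you reload depends on your deployment method. For example, assuming a source installation where the `apisix` CLI is available on the PATH, you could run:

```shell
apisix reload
```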
## Examples
The examples below demonstrate how you can configure `proxy-cache` for different scenarios.
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
### Cache Data on Disk
The on-disk caching strategy offers the advantages of data persistence across system restarts and larger storage capacity compared to the in-memory cache. It is suitable for applications that prioritize durability and can tolerate slightly higher cache access latency.
The following example demonstrates how you can use `proxy-cache` Plugin on a Route to cache data on disk.
When using the on-disk caching strategy, the cache TTL is determined by the value of the `Expires` or `Cache-Control` response header. If neither header is present, or if APISIX returns `502 Bad Gateway` or `504 Gateway Timeout` due to unavailable Upstreams, the cache TTL defaults to the value configured in the [configuration files](#static-configurations).
Create a Route with the `proxy-cache` Plugin to cache data on disk:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "proxy-cache-route",
"uri": "/anything",
"plugins": {
"proxy-cache": {
"cache_strategy": "disk"
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org": 1
}
}
}'
```
Send a request to the Route:
```shell
curl -i "http://127.0.0.1:9080/anything"
```
You should see an `HTTP/1.1 200 OK` response with the following header, showing the Plugin is successfully enabled:
```text
Apisix-Cache-Status: MISS
```
As there is no cache available before the first response, `Apisix-Cache-Status: MISS` is shown.
Send the same request again within the cache TTL window. You should see an `HTTP/1.1 200 OK` response with the following headers, showing the cache is hit:
```text
Apisix-Cache-Status: HIT
```
Wait for the cache to expire after the TTL and send the same request again. You should see an `HTTP/1.1 200 OK` response with the following headers, showing the cache has expired:
```text
Apisix-Cache-Status: EXPIRED
```
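Optionally, you could also inspect the disk cache zone directory to see the cached objects APISIX has written. The path below assumes the sample static configuration shown earlier (`/tmp/disk_cache_one`); if APISIX runs inside a container, run the command inside that container:

```shell
ls -R /tmp/disk_cache_one
```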
### Cache Data in Memory
In-memory caching strategy offers the advantage of low-latency access to the cached data, as retrieving data from RAM is faster than retrieving data from disk storage. It also works well for storing temporary data that does not need to be persisted long-term, allowing for efficient caching of frequently changing data.
The following example demonstrates how you can use `proxy-cache` Plugin on a Route to cache data in memory.
Create a Route with `proxy-cache` and configure it to use memory-based caching:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "proxy-cache-route",
"uri": "/anything",
"plugins": {
"proxy-cache": {
"cache_strategy": "memory",
"cache_zone": "memory_cache",
"cache_ttl": 10
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org": 1
}
}
}'
```
Send a request to the Route:
```shell
curl -i "http://127.0.0.1:9080/anything"
```
You should see an `HTTP/1.1 200 OK` response with the following header, showing the Plugin is successfully enabled:
```text
Apisix-Cache-Status: MISS
```
As there is no cache available before the first response, `Apisix-Cache-Status: MISS` is shown.
Send the same request again within the cache TTL window. You should see an `HTTP/1.1 200 OK` response with the following headers, showing the cache is hit:
```text
Apisix-Cache-Status: HIT
```
### Cache Responses Conditionally
The following example demonstrates how you can configure the `proxy-cache` Plugin to conditionally cache responses.
Create a Route with the `proxy-cache` Plugin and configure the `no_cache` attribute, such that if at least one of the values of the URL parameter `no_cache` and header `no_cache` is not empty and is not equal to `0`, the response will not be cached:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "proxy-cache-route",
"uri": "/anything",
"plugins": {
"proxy-cache": {
"no_cache": ["$arg_no_cache", "$http_no_cache"]
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org": 1
}
}
}'
```
Send a few requests to the Route with the URL parameter `no_cache` value indicating cache bypass:
```shell
curl -i "http://127.0.0.1:9080/anything?no_cache=1"
```
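As a convenience, a small loop such as the following sketch sends several such requests and prints only the cache status header from each response:

```shell
# Send four GET requests and print only the cache status header from each response
for i in 1 2 3 4; do
  curl -s -o /dev/null -D - "http://127.0.0.1:9080/anything?no_cache=1" | grep -i "Apisix-Cache-Status"
done
```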
You should receive `HTTP/1.1 200 OK` responses for all requests and observe the following header every time:
```text
Apisix-Cache-Status: EXPIRED
```
Send a few other requests to the Route with the URL parameter `no_cache` value being zero:
```shell
curl -i "http://127.0.0.1:9080/anything?no_cache=0"
```
You should receive `HTTP/1.1 200 OK` responses for all requests and start seeing the cache being hit:
```text
Apisix-Cache-Status: HIT
```
You can also specify the value in the `no_cache` header as such:
```shell
curl -i "http://127.0.0.1:9080/anything" -H "no_cache: 1"
```
The response should not be cached:
```text
Apisix-Cache-Status: EXPIRED
```
### Retrieve Responses from Cache Conditionally
The following example demonstrates how you can configure the `proxy-cache` Plugin to conditionally retrieve responses from cache.
Create a Route with the `proxy-cache` Plugin and configure the `cache_bypass` attribute, such that if at least one of the values of the URL parameter `bypass` and header `bypass` is not empty and is not equal to `0`, the response will not be retrieved from the cache:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "proxy-cache-route",
"uri": "/anything",
"plugins": {
"proxy-cache": {
"cache_bypass": ["$arg_bypass", "$http_bypass"]
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org": 1
}
}
}'
```
Send a request to the Route with the URL parameter `bypass` value indicating cache bypass:
```shell
curl -i "http://127.0.0.1:9080/anything?bypass=1"
```
You should see an `HTTP/1.1 200 OK` response with the following header:
```text
Apisix-Cache-Status: BYPASS
```
Send another request to the Route with the URL parameter `bypass` value being zero:
```shell
curl -i "http://127.0.0.1:9080/anything?bypass=0"
```
You should see an `HTTP/1.1 200 OK` response with the following header:
```text
Apisix-Cache-Status: MISS
```
You can also specify the value in the `bypass` header as such:
```shell
curl -i "http://127.0.0.1:9080/anything" -H "bypass: 1"
```
The cache should be bypassed:
```text
Apisix-Cache-Status: BYPASS
```
### Cache for 502 and 504 Error Response Code
When the Upstream service returns server errors in the 500 range, the `proxy-cache` Plugin will cache the responses if and only if the returned status is `502 Bad Gateway` or `504 Gateway Timeout`.
The following example demonstrates the behavior of `proxy-cache` Plugin when the Upstream service returns `504 Gateway Timeout`.
Create a Route with the `proxy-cache` Plugin and configure a dummy Upstream service:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "proxy-cache-route",
"uri": "/timeout",
"plugins": {
"proxy-cache": { }
},
"upstream": {
"type": "roundrobin",
"nodes": {
"12.34.56.78": 1
}
}
}'
```
Generate a few requests to the Route:
```shell
seq 4 | xargs -I{} curl -I "http://127.0.0.1:9080/timeout"
```
You should see a response similar to the following:
```text
HTTP/1.1 504 Gateway Time-out
...
Apisix-Cache-Status: MISS
HTTP/1.1 504 Gateway Time-out
...
Apisix-Cache-Status: HIT
HTTP/1.1 504 Gateway Time-out
...
Apisix-Cache-Status: HIT
HTTP/1.1 504 Gateway Time-out
...
Apisix-Cache-Status: HIT
```
However, if the Upstream service returns `503 Service Temporarily Unavailable`, the response will not be cached.

View File

@@ -0,0 +1,103 @@
---
title: proxy-control
keywords:
- Apache APISIX
- API Gateway
- Proxy Control
description: This document contains information about the Apache APISIX proxy-control Plugin, you can use it to control the behavior of the NGINX proxy dynamically.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The proxy-control Plugin dynamically controls the behavior of the NGINX proxy.
:::info IMPORTANT
This Plugin requires APISIX to run on [APISIX-Runtime](../FAQ.md#how-do-i-build-the-apisix-runtime-environment). See [apisix-build-tools](https://github.com/api7/apisix-build-tools) for more info.
:::
## Attributes
| Name | Type | Required | Default | Description |
| ----------------- | ------- | -------- | ------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| request_buffering | boolean | False | true | When set to `true`, the Plugin dynamically sets the [`proxy_request_buffering`](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_request_buffering) directive. |
## Enable Plugin
The example below enables the Plugin on a specific Route:
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl -i http://127.0.0.1:9180/apisix/admin/routes/1 \
-H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/upload",
"plugins": {
"proxy-control": {
"request_buffering": false
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
}
}'
```
## Example usage
The example below shows the use case of uploading a big file:
```shell
curl -i http://127.0.0.1:9080/upload -d @very_big_file
```
You should not find the message "a client request body is buffered to a temporary file" in the error log.
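If you do not have a large file at hand, you could, for instance, generate one and then check the error log. The log path below assumes a default source installation where the error log is written to `logs/error.log` under the APISIX working directory:

```shell
# Generate a ~100 MB file of zeros to use as the request body
dd if=/dev/zero of=very_big_file bs=1M count=100

# Upload it through the Route
curl -i http://127.0.0.1:9080/upload -d @very_big_file

# With request buffering disabled, this should print no matches
grep "a client request body is buffered to a temporary file" logs/error.log
```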
## Delete Plugin
To remove the `proxy-control` Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.
```shell
curl -i http://127.0.0.1:9180/apisix/admin/routes/1 \
-H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/upload",
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
}
}'
```

View File

@@ -0,0 +1,145 @@
---
title: proxy-mirror
keywords:
- Apache APISIX
- API Gateway
- Proxy Mirror
description: The proxy-mirror Plugin duplicates ingress traffic to APISIX and forwards it to a designated Upstream without interrupting the regular services.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
<head>
<link rel="canonical" href="https://docs.api7.ai/hub/proxy-mirror" />
</head>
## Description
The `proxy-mirror` Plugin duplicates ingress traffic to APISIX and forwards it to a designated Upstream, without interrupting the regular services. You can configure the Plugin to mirror all traffic or only a portion. The mechanism benefits a few use cases, including troubleshooting, security inspection, analytics, and more.
Note that APISIX ignores any response from the Upstream host receiving mirrored traffic.
## Attributes
| Name | Type | Required | Default | Valid values | Description |
|--------------|--------|----------|---------|--------------|---------------------------------------------------------------------------------------------------------------------------|
| host | string | True | | | Address of the host to forward the mirrored traffic to. The address should contain the scheme but without the path, such as `http://127.0.0.1:8081`. |
| path         | string | False    |         |              | Path of the host to forward the mirrored traffic to. If unspecified, defaults to the current URI path of the Route. Not applicable if the Plugin is mirroring gRPC traffic. |
| path_concat_mode | string | False | replace | ["replace", "prefix"] | Concatenation mode when `path` is specified. When set to `replace`, the configured `path` would be directly used as the path of the host to forward the mirrored traffic to. When set to `prefix`, the path to forward to would be the configured `path`, appended by the requested URI path of the Route. Not applicable if the Plugin is mirroring gRPC traffic. |
| sample_ratio | number | False    | 1       | [0.00001, 1] | Ratio of the requests that will be mirrored. By default, all traffic is mirrored. |
## Static Configurations
By default, timeout values for the Plugin are pre-configured in the [default configuration](https://github.com/apache/apisix/blob/master/apisix/cli/config.lua).
To customize these values, add the corresponding configurations to `config.yaml`. For example:
```yaml
plugin_attr:
proxy-mirror:
timeout:
connect: 60s
read: 60s
send: 60s
```
Reload APISIX for changes to take effect.
## Examples
The examples below demonstrate how to configure `proxy-mirror` for different scenarios.
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
### Mirror Partial Traffic
The following example demonstrates how you can configure `proxy-mirror` to mirror 50% of the traffic to a Route and forward them to another Upstream service.
Start a sample NGINX server for receiving mirrored traffic:
```shell
docker run -p 8081:80 --name nginx nginx
```
You should see the NGINX access and error logs printed in this terminal session.
Open a new terminal session and create a Route with `proxy-mirror` to mirror 50% of the traffic:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "traffic-mirror-route",
"uri": "/get",
"plugins": {
"proxy-mirror": {
"host": "http://127.0.0.1:8081",
"sample_ratio": 0.5
}
},
"upstream": {
"nodes": {
"httpbin.org": 1
},
"type": "roundrobin"
}
}'
```
Generate a few requests to the Route:
```shell
curl -i "http://127.0.0.1:9080/get"
```
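To generate enough traffic to observe the 50% sampling, you could, for example, repeat the request and print only the response status codes:

```shell
# Send 10 GET requests; with sample_ratio of 0.5, roughly half should be mirrored
seq 10 | xargs -I{} curl -s -o /dev/null -w "%{http_code}\n" "http://127.0.0.1:9080/get"
```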
You should receive `HTTP/1.1 200 OK` responses for all requests.
Navigating back to the NGINX terminal session, you should see a number of access log entries, roughly half the number of requests generated:
```text
172.17.0.1 - - [29/Jan/2024:23:11:01 +0000] "GET /get HTTP/1.1" 404 153 "-" "curl/7.64.1" "-"
```
This suggests APISIX has mirrored the request to the NGINX server. Here, the HTTP response status is `404` since the sample NGINX server does not implement the Route.
### Configure Mirroring Timeouts
The following example demonstrates how you can update the default connect, read, and send timeouts for the Plugin. This could be useful when mirroring traffic to a very slow backend service.
As the request mirroring was implemented as sub-requests, excessive delays in the sub-requests could lead to the blocking of the original requests. By default, the connect, read, and send timeouts are set to 60 seconds. To update these values, you can configure them in the `plugin_attr` section of the configuration file as such:
```yaml title="conf/config.yaml"
plugin_attr:
proxy-mirror:
timeout:
connect: 2000ms
read: 2000ms
send: 2000ms
```
Reload APISIX for changes to take effect.

View File

@@ -0,0 +1,509 @@
---
title: proxy-rewrite
keywords:
- Apache APISIX
- API Gateway
- Plugin
- Proxy Rewrite
- proxy-rewrite
description: The proxy-rewrite Plugin offers options to rewrite requests that APISIX forwards to Upstream services. With this plugin, you can modify the HTTP methods, request destination Upstream addresses, request headers, and more.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
<head>
<link rel="canonical" href="https://docs.api7.ai/hub/proxy-rewrite" />
</head>
## Description
The `proxy-rewrite` Plugin offers options to rewrite requests that APISIX forwards to Upstream services. With this plugin, you can modify the HTTP methods, request destination Upstream addresses, request headers, and more.
## Attributes
| Name | Type | Required | Default | Valid values | Description |
|-----------------------------|---------------|----------|---------|----------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| uri | string | False | | | New Upstream URI path. Value supports [NGINX variables](https://nginx.org/en/docs/http/ngx_http_core_module.html). For example, `$arg_name`. |
| method | string | False | | ["GET", "POST", "PUT", "HEAD", "DELETE", "OPTIONS","MKCOL", "COPY", "MOVE", "PROPFIND", "PROPFIND","LOCK", "UNLOCK", "PATCH", "TRACE"] | HTTP method to rewrite requests to use. |
| regex_uri                   | array[string] | False    |         |                                                                                                                                          | Regular expressions used to match the URI path from client requests and compose a new Upstream URI path. When both `uri` and `regex_uri` are configured, `uri` has a higher priority. The array should contain one or more **key-value pairs**, with the key being the regular expression to match the URI against and the value being the new Upstream URI path. For example, with `["^/iresty/(.*)/(.*)", "/$1-$2", "^/theothers/*", "/theothers"]`, if a request is originally sent to `/iresty/hello/world`, the Plugin will rewrite the Upstream URI path to `/iresty/hello-world`; if a request is originally sent to `/theothers/hello/world`, the Plugin will rewrite the Upstream URI path to `/theothers`. |
| host | string | False | | | Set [`Host`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Host) request header. |
| headers | object | False | | | Header actions to be executed. Can be set to objects of action verbs `add`, `remove`, and/or `set`; or an object consisting of headers to be `set`. When multiple action verbs are configured, actions are executed in the order of `add`, `remove`, and `set`. |
| headers.add | object | False | | | Headers to append to requests. If a header already present in the request, the header value will be appended. Header value could be set to a constant, one or more [NGINX variables](https://nginx.org/en/docs/http/ngx_http_core_module.html), or the matched result of `regex_uri` using variables such as `$1-$2-$3`. |
| headers.set | object | False | | | Headers to set to requests. If a header already present in the request, the header value will be overwritten. Header value could be set to a constant, one or more [NGINX variables](https://nginx.org/en/docs/http/ngx_http_core_module.html), or the matched result of `regex_uri` using variables such as `$1-$2-$3`. Should not be used to set `Host`. |
| headers.remove              | array[string] | False    |         |                                                                                                                                          | Headers to remove from requests. |
| use_real_request_uri_unsafe | boolean | False | false | | If true, bypass URI normalization and allow for the full original request URI. Enabling this option is considered unsafe. |
## Examples
The examples below demonstrate how you can configure `proxy-rewrite` on a Route in different scenarios.
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
### Rewrite Host Header
The following example demonstrates how you can modify the `Host` header in a request. Note that you should not use `headers.set` to set the `Host` header.
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "proxy-rewrite-route",
"methods": ["GET"],
"uri": "/headers",
"plugins": {
"proxy-rewrite": {
"host": "myapisix.demo"
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Send a request to `/headers` to check all the request headers sent to upstream:
```shell
curl "http://127.0.0.1:9080/headers"
```
You should see a response similar to the following:
```text
{
"headers": {
"Accept": "*/*",
"Host": "myapisix.demo",
"User-Agent": "curl/8.2.1",
"X-Amzn-Trace-Id": "Root=1-64fef198-29da0970383150175bd2d76d",
"X-Forwarded-Host": "127.0.0.1"
}
}
```
### Rewrite URI And Set Headers
The following example demonstrates how you can rewrite the request Upstream URI and set additional header values. If the same headers are present in the client request, the header values set in the Plugin will overwrite the values from the client request.
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "proxy-rewrite-route",
"methods": ["GET"],
"uri": "/",
"plugins": {
"proxy-rewrite": {
"uri": "/anything",
"headers": {
"set": {
"X-Api-Version": "v1",
"X-Api-Engine": "apisix"
}
}
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Send a request to verify:
```shell
curl "http://127.0.0.1:9080/" -H '"X-Api-Version": "v2"'
```
You should see a response similar to the following:
```text
{
"args": {},
"data": "",
"files": {},
"form": {},
"headers": {
"Accept": "*/*",
"Host": "httpbin.org",
"User-Agent": "curl/8.2.1",
"X-Amzn-Trace-Id": "Root=1-64fed73a-59cd3bd640d76ab16c97f1f1",
"X-Api-Engine": "apisix",
"X-Api-Version": "v1",
"X-Forwarded-Host": "127.0.0.1"
},
"json": null,
"method": "GET",
"origin": "::1, 103.248.35.179",
"url": "http://localhost/anything"
}
```
Note that both headers are present in the forwarded request, and the `X-Api-Version` header value configured in the Plugin overwrites the value passed in the request.
### Rewrite URI And Append Headers
The following example demonstrates how you can rewrite the request Upstream URI and append additional header values. If the same headers are present in the client request, their values will be appended to the header values configured in the Plugin.
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "proxy-rewrite-route",
"methods": ["GET"],
"uri": "/",
"plugins": {
"proxy-rewrite": {
"uri": "/headers",
"headers": {
"add": {
"X-Api-Version": "v1",
"X-Api-Engine": "apisix"
}
}
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Send a request to verify:
```shell
curl "http://127.0.0.1:9080/" -H '"X-Api-Version": "v2"'
```
You should see a response similar to the following:
```text
{
"headers": {
"Accept": "*/*",
"Host": "httpbin.org",
"User-Agent": "curl/8.2.1",
"X-Amzn-Trace-Id": "Root=1-64fed73a-59cd3bd640d76ab16c97f1f1",
"X-Api-Engine": "apisix",
"X-Api-Version": "v1,v2",
"X-Forwarded-Host": "127.0.0.1"
}
}
```
Note that both headers are present in the forwarded request, and the `X-Api-Version` header value configured in the Plugin is appended with the value passed in the request.
### Remove Existing Header
The following example demonstrates how you can remove an existing header `User-Agent`.
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "proxy-rewrite-route",
"methods": ["GET"],
"uri": "/headers",
"plugins": {
"proxy-rewrite": {
"headers": {
"remove":[
"User-Agent"
]
}
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Send a request to verify if the specified header is removed:
```shell
curl "http://127.0.0.1:9080/headers"
```
You should see a response similar to the following, where the `User-Agent` header is not present:
```text
{
"headers": {
"Accept": "*/*",
"Host": "httpbin.org",
"X-Amzn-Trace-Id": "Root=1-64fef302-07f2b13e0eb006ba776ad91d",
"X-Forwarded-Host": "127.0.0.1"
}
}
```
### Rewrite URI Using RegEx
The following example demonstrates how you can parse text from the original Upstream URI path and use it to compose a new Upstream URI path. In this example, APISIX is configured to forward all requests from `/test/user/agent` to `/user-agent`.
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "proxy-rewrite-route",
"uri": "/test/*",
"plugins": {
"proxy-rewrite": {
"regex_uri": ["^/test/(.*)/(.*)", "/$1-$2"]
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Send a request to `/test/user/agent` to verify that the Upstream URI path is rewritten to `/user-agent`:
```shell
curl "http://127.0.0.1:9080/test/user/agent"
```
You should see a response similar to the following:
```text
{
"user-agent": "curl/8.2.1"
}
```
### Add URL Parameters
The following example demonstrates how you can add URL parameters to the request.
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "proxy-rewrite-route",
"methods": ["GET"],
"uri": "/get",
"plugins": {
"proxy-rewrite": {
"uri": "/get?arg1=apisix&arg2=plugin"
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Send a request to verify if the URL parameters are also forwarded to upstream:
```shell
curl "http://127.0.0.1:9080/get"
```
You should see a response similar to the following:
```text
{
"args": {
"arg1": "apisix",
"arg2": "plugin"
},
"headers": {
"Accept": "*/*",
"Host": "127.0.0.1",
"User-Agent": "curl/8.2.1",
"X-Amzn-Trace-Id": "Root=1-64fef6dc-2b0e09591db7353a275cdae4",
"X-Forwarded-Host": "127.0.0.1"
},
"origin": "127.0.0.1, 103.248.35.148",
# highlight-next-line
"url": "http://127.0.0.1/get?arg1=apisix&arg2=plugin"
}
```
### Rewrite HTTP Method
The following example demonstrates how you can rewrite a GET request into a POST request.
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "proxy-rewrite-route",
"methods": ["GET"],
"uri": "/get",
"plugins": {
"proxy-rewrite": {
"uri": "/anything",
"method":"POST"
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Send a GET request to `/get` to verify if it is transformed into a POST request to `/anything`:
```shell
curl "http://127.0.0.1:9080/get"
```
You should see a response similar to the following:
```text
{
"args": {},
"data": "",
"files": {},
"form": {},
"headers": {
"Accept": "*/*",
"Host": "127.0.0.1",
"User-Agent": "curl/8.2.1",
"X-Amzn-Trace-Id": "Root=1-64fef7de-0c63387645353998196317f2",
"X-Forwarded-Host": "127.0.0.1"
},
"json": null,
"method": "POST",
"origin": "::1, 103.248.35.179",
"url": "http://localhost/anything"
}
```
### Forward Consumer Names to Upstream
The following example demonstrates how you can forward the names of Consumers who authenticate successfully to Upstream services. As an example, you will be using `key-auth` as the authentication method.
Create a Consumer `JohnDoe`:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"username": "JohnDoe"
}'
```
Create `key-auth` credential for the Consumer:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers/JohnDoe/credentials" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "cred-john-key-auth",
"plugins": {
"key-auth": {
"key": "john-key"
}
}
}'
```
Next, create a Route with key authentication enabled, configure `proxy-rewrite` to add Consumer name to the header, and remove the authentication key so that it is not visible to the Upstream service:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "consumer-restricted-route",
"uri": "/get",
"plugins": {
"key-auth": {},
"proxy-rewrite": {
"headers": {
"set": {
"X-Apisix-Consumer": "$consumer_name"
},
"remove": [ "Apikey" ]
}
}
},
"upstream" : {
"nodes": {
"httpbin.org":1
}
}
}'
```
Send a request to the Route as Consumer `JohnDoe`:
```shell
curl -i "http://127.0.0.1:9080/get" -H 'apikey: john-key'
```
You should receive an `HTTP/1.1 200 OK` response with the following body:
```text
{
"args": {},
"headers": {
"Accept": "*/*",
"Host": "127.0.0.1",
"User-Agent": "curl/8.4.0",
"X-Amzn-Trace-Id": "Root=1-664b01a6-2163c0156ed4bff51d87d877",
"X-Apisix-Consumer": "JohnDoe",
"X-Forwarded-Host": "127.0.0.1"
},
"origin": "172.19.0.1, 203.12.12.12",
"url": "http://127.0.0.1/get"
}
```
Send another request to the Route without the valid credential:
```shell
curl -i "http://127.0.0.1:9080/get"
```
You should receive an `HTTP/1.1 401 Unauthorized` response.

View File

@@ -0,0 +1,245 @@
---
title: public-api
keywords:
- Apache APISIX
- API Gateway
- Public API
description: The public-api plugin exposes an internal API endpoint, making it publicly accessible. One of the primary use cases of this plugin is to expose internal endpoints created by other plugins.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
<head>
<link rel="canonical" href="https://docs.api7.ai/hub/public-api" />
</head>
## Description
The `public-api` Plugin exposes an internal API endpoint, making it publicly accessible. One of the primary use cases of this Plugin is to expose internal endpoints created by other Plugins.
## Attributes
| Name | Type | Required | Default | Valid Values | Description |
|---------|-----------|----------|---------|--------------|-------------|
| uri | string | False | | | Internal endpoint to expose. If not configured, expose the Route URI. |
## Examples
The examples below demonstrate how you can configure `public-api` in different scenarios.
### Expose Prometheus Metrics at Custom Endpoint
The following example demonstrates how you can disable the Prometheus export server that, by default, exposes an endpoint on port `9091`, and expose APISIX Prometheus metrics on a new public API endpoint on port `9080`, which APISIX uses to listen to other client requests.
You will also configure the Route such that the internal endpoint `/apisix/prometheus/metrics` is exposed at a custom endpoint.
:::caution
If a large quantity of metrics is being collected, the Plugin could take up a significant amount of CPU resources for metric computations and negatively impact the processing of regular requests.
To address this issue, APISIX uses [privileged agent](https://github.com/openresty/lua-resty-core/blob/master/lib/ngx/process.md#enable_privileged_agent) and offloads the metric computations to a separate process. This optimization applies automatically if you use the metric endpoint configured under `plugin_attr.prometheus.export_addr` in the configuration file. If you expose the metric endpoint with the `public-api` Plugin, you will not benefit from this optimization.
:::
Disable the Prometheus export server in the configuration file and reload APISIX for changes to take effect:
```yaml title="conf/config.yaml"
plugin_attr:
prometheus:
enable_export_server: false
```
Next, create a Route with the `public-api` Plugin and expose a public API endpoint for APISIX metrics. You should set the Route `uri` to the custom endpoint path and set the Plugin `uri` to the internal endpoint to be exposed.
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "prometheus-metrics",
"uri": "/prometheus_metrics",
"plugins": {
"public-api": {
"uri": "/apisix/prometheus/metrics"
}
}
}'
```
Send a request to the custom metrics endpoint:
```shell
curl "http://127.0.0.1:9080/prometheus_metrics"
```
You should see an output similar to the following:
```text
# HELP apisix_http_requests_total The total number of client requests since APISIX started
# TYPE apisix_http_requests_total gauge
apisix_http_requests_total 1
# HELP apisix_nginx_http_current_connections Number of HTTP connections
# TYPE apisix_nginx_http_current_connections gauge
apisix_nginx_http_current_connections{state="accepted"} 1
apisix_nginx_http_current_connections{state="active"} 1
apisix_nginx_http_current_connections{state="handled"} 1
apisix_nginx_http_current_connections{state="reading"} 0
apisix_nginx_http_current_connections{state="waiting"} 0
apisix_nginx_http_current_connections{state="writing"} 1
...
```
### Expose Batch Requests Endpoint
The following example demonstrates how you can use the `public-api` Plugin to expose an endpoint for the `batch-requests` Plugin, which is used for assembling multiple requests into one single request before sending them to the gateway.
[//]:<TODO: update link to batch-requests plugin doc when it is available>
Create a sample Route to httpbin's `/anything` endpoint for verification purpose:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "httpbin-anything",
"uri": "/anything",
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
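Optionally, send a request to this sample Route first to confirm that it is reachable before assembling batch requests:

```shell
curl -i "http://127.0.0.1:9080/anything"
```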
Create a Route with `public-api` Plugin and set the Route `uri` to the internal endpoint to be exposed:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "batch-requests",
"uri": "/apisix/batch-requests",
"plugins": {
"public-api": {}
}
}'
```
Send a pipelined request consisting of a GET and a POST request to the exposed batch requests endpoint:
```shell
curl "http://127.0.0.1:9080/apisix/batch-requests" -X POST -d '
{
"pipeline": [
{
"method": "GET",
"path": "/anything"
},
{
"method": "POST",
"path": "/anything",
"body": "a post request"
}
]
}'
```
You should receive responses from both requests, similar to the following:
```json
[
{
"reason": "OK",
"body": "{\n \"args\": {}, \n \"data\": \"\", \n \"files\": {}, \n \"form\": {}, \n \"headers\": {\n \"Accept\": \"*/*\", \n \"Host\": \"127.0.0.1\", \n \"User-Agent\": \"curl/8.6.0\", \n \"X-Amzn-Trace-Id\": \"Root=1-67b6e33b-5a30174f5534287928c54ca9\", \n \"X-Forwarded-Host\": \"127.0.0.1\"\n }, \n \"json\": null, \n \"method\": \"GET\", \n \"origin\": \"192.168.107.1, 43.252.208.84\", \n \"url\": \"http://127.0.0.1/anything\"\n}\n",
"headers": {
...
},
"status": 200
},
{
"reason": "OK",
"body": "{\n \"args\": {}, \n \"data\": \"a post request\", \n \"files\": {}, \n \"form\": {}, \n \"headers\": {\n \"Accept\": \"*/*\", \n \"Content-Length\": \"14\", \n \"Host\": \"127.0.0.1\", \n \"User-Agent\": \"curl/8.6.0\", \n \"X-Amzn-Trace-Id\": \"Root=1-67b6e33b-0eddcec07f154dac0d77876f\", \n \"X-Forwarded-Host\": \"127.0.0.1\"\n }, \n \"json\": null, \n \"method\": \"POST\", \n \"origin\": \"192.168.107.1, 43.252.208.84\", \n \"url\": \"http://127.0.0.1/anything\"\n}\n",
"headers": {
...
},
"status": 200
}
]
```
If you would like to expose the batch requests endpoint at a custom endpoint, create a Route with `public-api` Plugin as such. You should set the Route `uri` to the custom endpoint path and set the plugin `uri` to the internal endpoint to be exposed.
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "batch-requests",
"uri": "/batch-requests",
"plugins": {
"public-api": {
"uri": "/apisix/batch-requests"
}
}
}'
```
The batch requests endpoint should now be exposed as `/batch-requests`, instead of `/apisix/batch-requests`.
Send a pipelined request consisting of a GET and a POST request to the exposed batch requests endpoint:
```shell
curl "http://127.0.0.1:9080/batch-requests" -X POST -d '
{
"pipeline": [
{
"method": "GET",
"path": "/anything"
},
{
"method": "POST",
"path": "/anything",
"body": "a post request"
}
]
}'
```
You should receive responses from both requests, similar to the following:
```json
[
{
"reason": "OK",
"body": "{\n \"args\": {}, \n \"data\": \"\", \n \"files\": {}, \n \"form\": {}, \n \"headers\": {\n \"Accept\": \"*/*\", \n \"Host\": \"127.0.0.1\", \n \"User-Agent\": \"curl/8.6.0\", \n \"X-Amzn-Trace-Id\": \"Root=1-67b6e33b-5a30174f5534287928c54ca9\", \n \"X-Forwarded-Host\": \"127.0.0.1\"\n }, \n \"json\": null, \n \"method\": \"GET\", \n \"origin\": \"192.168.107.1, 43.252.208.84\", \n \"url\": \"http://127.0.0.1/anything\"\n}\n",
"headers": {
...
},
"status": 200
},
{
"reason": "OK",
"body": "{\n \"args\": {}, \n \"data\": \"a post request\", \n \"files\": {}, \n \"form\": {}, \n \"headers\": {\n \"Accept\": \"*/*\", \n \"Content-Length\": \"14\", \n \"Host\": \"127.0.0.1\", \n \"User-Agent\": \"curl/8.6.0\", \n \"X-Amzn-Trace-Id\": \"Root=1-67b6e33b-0eddcec07f154dac0d77876f\", \n \"X-Forwarded-Host\": \"127.0.0.1\"\n }, \n \"json\": null, \n \"method\": \"POST\", \n \"origin\": \"192.168.107.1, 43.252.208.84\", \n \"url\": \"http://127.0.0.1/anything\"\n}\n",
"headers": {
...
},
"status": 200
}
]
```

View File

@@ -0,0 +1,202 @@
---
title: real-ip
keywords:
- Apache APISIX
- API Gateway
- Plugin
- Real IP
description: The real-ip plugin allows Apache APISIX to set the client's real IP by the IP address passed in the HTTP header or HTTP query string.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
<head>
<link rel="canonical" href="https://docs.api7.ai/hub/real-ip" />
</head>
## Description
The `real-ip` Plugin allows APISIX to set the client's real IP from the IP address passed in an HTTP header or the HTTP query string. This is particularly useful when APISIX is behind a reverse proxy, since the proxy would otherwise appear as the request-originating client.
The Plugin is functionally similar to NGINX's [ngx_http_realip_module](https://nginx.org/en/docs/http/ngx_http_realip_module.html) but offers more flexibility.
## Attributes
| Name | Type | Required | Default | Valid values | Description |
|-----------|---------|----------|---------|----------------|---------------|
| source | string | True | | |A built-in [APISIX variable](https://apisix.apache.org/docs/apisix/apisix-variable/) or [NGINX variable](https://nginx.org/en/docs/varindex.html), such as `http_x_forwarded_for` or `arg_realip`. The variable value should be a valid IP address that represents the client's real IP address, with an optional port.|
| trusted_addresses | array[string] | False | | array of IPv4 or IPv6 addresses (CIDR notation acceptable) | Trusted addresses that are known to send correct replacement addresses. This configuration sets the [`set_real_ip_from`](https://nginx.org/en/docs/http/ngx_http_realip_module.html#set_real_ip_from) directive. |
| recursive | boolean | False | False | | If false, replace the original client address that matches one of the trusted addresses by the last address sent in the configured `source`.<br />If true, replace the original client address that matches one of the trusted addresses by the last non-trusted address sent in the configured `source`. |
:::note
If the address specified in `source` is missing or invalid, the Plugin would not change the client address.
:::
## Examples
The examples below demonstrate how you can configure `real-ip` in different scenarios.
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
### Obtain Real Client Address From URI Parameter
The following example demonstrates how to update the client IP address with a URI parameter.
Create a Route as follows. You should configure `source` to obtain value from the URL parameter `realip` using [APISIX variable](https://apisix.apache.org/docs/apisix/apisix-variable/) or [NGINX variable](https://nginx.org/en/docs/varindex.html). Use the `response-rewrite` Plugin to set response headers to verify if the client IP and port were actually updated.
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "real-ip-route",
"uri": "/get",
"plugins": {
"real-ip": {
"source": "arg_realip",
"trusted_addresses": ["127.0.0.0/24"]
},
"response-rewrite": {
"headers": {
"remote_addr": "$remote_addr",
"remote_port": "$remote_port"
}
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Send a request to the Route with real IP and port in the URL parameter:
```shell
curl -i "http://127.0.0.1:9080/get?realip=1.2.3.4:9080"
```
You should see that the response includes the following headers:
```text
remote-addr: 1.2.3.4
remote-port: 9080
```
### Obtain Real Client Address From Header
The following example shows how to set the real client IP when APISIX is behind a reverse proxy, such as a load balancer, and the proxy exposes the real client IP in the [`X-Forwarded-For`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Forwarded-For) header.
Create a Route as follows. You should configure `source` to obtain value from the request header `X-Forwarded-For` using [APISIX variable](https://apisix.apache.org/docs/apisix/apisix-variable/) or [NGINX variable](https://nginx.org/en/docs/varindex.html). Use the `response-rewrite` Plugin to set a response header to verify if the client IP was actually updated.
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "real-ip-route",
"uri": "/get",
"plugins": {
"real-ip": {
"source": "http_x_forwarded_for",
"trusted_addresses": ["127.0.0.0/24"]
},
"response-rewrite": {
"headers": {
"remote_addr": "$remote_addr"
}
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Send a request to the Route:
```shell
curl -i "http://127.0.0.1:9080/get"
```
You should see a response including the following header:
```text
remote-addr: 10.26.3.19
```
The IP address should correspond to the IP address of the request-originating client.
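To simulate a proxy locally, you could also send the header yourself. The sample IP `1.2.3.4` below is only illustrative; since the request originates from `127.0.0.1`, which falls within the configured `trusted_addresses`, APISIX replaces the client address with the value passed in the header:

```shell
curl -i "http://127.0.0.1:9080/get" -H "X-Forwarded-For: 1.2.3.4"
```

The response should then include the following header:

```text
remote-addr: 1.2.3.4
```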
### Obtain Real Client Address Behind Multiple Proxies
The following example shows how to get the real client IP when APISIX is behind multiple proxies, which causes [`X-Forwarded-For`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Forwarded-For) header to include a list of proxy IP addresses.
Create a Route as follows. You should configure `source` to obtain value from the request header `X-Forwarded-For` using [APISIX variable](https://apisix.apache.org/docs/apisix/apisix-variable/) or [NGINX variable](https://nginx.org/en/docs/varindex.html). Set `recursive` to `true` so that the original client address that matches one of the trusted addresses is replaced by the last non-trusted address sent in the configured `source`. Then, use the `response-rewrite` Plugin to set a response header to verify if the client IP was actually updated.
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "real-ip-route",
"uri": "/get",
"plugins": {
"real-ip": {
"source": "http_x_forwarded_for",
"recursive": true,
"trusted_addresses": ["192.128.0.0/16", "127.0.0.0/24"]
},
"response-rewrite": {
"headers": {
"remote_addr": "$remote_addr"
}
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Send a request to the Route:
```shell
curl -i "http://127.0.0.1:9080/get" \
-H "X-Forwarded-For: 127.0.0.2, 192.128.1.1, 127.0.0.1"
```
You should see a response including the following header:
```text
remote-addr: 127.0.0.2
```

View File

@@ -0,0 +1,172 @@
---
title: redirect
keywords:
- Apache APISIX
- API Gateway
- Plugin
- Redirect
description: This document contains information about the Apache APISIX redirect Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `redirect` Plugin can be used to configure redirects.
## Attributes
| Name | Type | Required | Default | Valid values | Description |
|---------------------|---------------|----------|---------|--------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| http_to_https | boolean | False | false | | When set to `true` and the request is HTTP, it will be redirected to HTTPS with the same URI with a 301 status code. Note the querystring from the raw URI will also be contained in the Location header. |
| uri | string | False | | | URI to redirect to. Can contain Nginx variables. For example, `/test/index.html`, `$uri/index.html`, `${uri}/index.html`, `https://example.com/foo/bar`. If you refer to a variable name that doesn't exist, instead of throwing an error, it will treat it as an empty variable. |
| regex_uri           | array[string] | False    |         |              | Match the URL from the client with a regular expression and redirect. If it doesn't match, the request will be forwarded to the Upstream. Only one of `uri` and `regex_uri` can be used at a time. For example, `["^/iresty/(.*)/(.*)/(.*)", "/$1-$2-$3"]`, where the first element is the regular expression to match and the second element is the URI to redirect to. APISIX currently supports only one `regex_uri`, so the length of the `regex_uri` array is `2`. |
| ret_code | integer | False | 302 | [200, ...] | HTTP response code. |
| encode_uri | boolean | False | false | | When set to `true` the URI in the `Location` header will be encoded as per [RFC3986](https://datatracker.ietf.org/doc/html/rfc3986). |
| append_query_string | boolean | False | false | | When set to `true`, adds the query string from the original request to the `Location` header. If the configured `uri` or `regex_uri` already contains a query string, the query string from the request will be appended to it with an `&`. Do not use this if you have already handled the query string (for example, with an Nginx variable `$request_uri`) to avoid duplicates. |
:::note
* Only one of `http_to_https`, `uri` and `regex_uri` can be configured.
* Only one of `http_to_https` and `append_query_string` can be configured.
* When enabling `http_to_https`, the port in the redirect URL is chosen in the following order (in descending order of priority):
* Read `plugin_attr.redirect.https_port` from the configuration file (`conf/config.yaml`).
* If `apisix.ssl` is enabled, read `apisix.ssl.listen` and select a port randomly from it.
* Use 443 as the default https port.
:::
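For instance, a sketch of setting a custom HTTPS port for redirects in `conf/config.yaml` (the port `8443` below is only an example) could look like the following; reload APISIX for the change to take effect:

```yaml
plugin_attr:
  redirect:
    https_port: 8443
```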
## Enable Plugin
The example below shows how you can enable the `redirect` Plugin on a specific Route:
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/test/index.html",
"plugins": {
"redirect": {
"uri": "/test/default.html",
"ret_code": 301
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:80": 1
}
}
}'
```
You can also use any built-in Nginx variables in the new URI:
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/test",
"plugins": {
"redirect": {
"uri": "$uri/index.html",
"ret_code": 301
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:80": 1
}
}
}'
```
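You can also redirect based on a regular expression using `regex_uri`. The sketch below follows the pattern shown in the attributes table and redirects `/iresty/a/b/c` to `/a-b-c`; the URI and upstream are illustrative:

```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
    "uri": "/iresty/*",
    "plugins": {
        "redirect": {
            "regex_uri": ["^/iresty/(.*)/(.*)/(.*)", "/$1-$2-$3"]
        }
    },
    "upstream": {
        "type": "roundrobin",
        "nodes": {
            "127.0.0.1:80": 1
        }
    }
}'
```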
## Example usage
First, we configure the Plugin as mentioned above. We can then make a request and it will be redirected as shown below:
```shell
curl http://127.0.0.1:9080/test/index.html -i
```
```shell
HTTP/1.1 301 Moved Permanently
Date: Wed, 23 Oct 2019 13:48:23 GMT
Content-Type: text/html
Content-Length: 166
Connection: keep-alive
Location: /test/default.html
...
```
The response shows the `301` response code and the `Location` header, indicating that the Plugin is in effect.
The example below shows how you can redirect HTTP to HTTPS:
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/hello",
"plugins": {
"redirect": {
"http_to_https": true
}
}
}'
```
To test this:
```shell
curl http://127.0.0.1:9080/hello -i
```
```
HTTP/1.1 301 Moved Permanently
...
Location: https://127.0.0.1:9443/hello
...
```
## Delete Plugin
To remove the `redirect` Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/test/index.html",
"plugins": {},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:80": 1
}
}
}'
```


@@ -0,0 +1,135 @@
---
title: referer-restriction
keywords:
- Apache APISIX
- API Gateway
- Referer restriction
description: This document contains information about the Apache APISIX referer-restriction Plugin, which can be used to restrict access to a Service or a Route by whitelisting/blacklisting the Referer request header.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `referer-restriction` Plugin can be used to restrict access to a Service or a Route by whitelisting/blacklisting the `Referer` request header.
## Attributes
| Name | Type | Required | Default | Valid values | Description |
|----------------|---------------|----------|----------------------------------|--------------|---------------------------------------------------------------------------------------------------|
| whitelist | array[string] | False | | | List of hostnames to whitelist. A hostname can start with `*` for wildcard. |
| blacklist | array[string] | False | | | List of hostnames to blacklist. A hostname can start with `*` for wildcard. |
| message | string | False | "Your referer host is not allowed" | [1, 1024] | Message returned when access is not allowed. |
| bypass_missing | boolean | False | false | | When set to `true`, bypasses the check when the `Referer` request header is missing or malformed. |
:::info IMPORTANT
Only one of the `whitelist` or `blacklist` attributes can be specified. They cannot be used together.
:::
## Enable Plugin
You can enable the Plugin on a specific Route or a Service as shown below:
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/index.html",
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
},
"plugins": {
"referer-restriction": {
"bypass_missing": true,
"whitelist": [
"xx.com",
"*.xx.com"
]
}
}
}'
```
## Example usage
Once you have configured the Plugin as shown above, you can test it by setting `Referer: http://xx.com/x`:
```shell
curl http://127.0.0.1:9080/index.html -H 'Referer: http://xx.com/x'
```
```shell
HTTP/1.1 200 OK
...
```
Now, if you make a request with `Referer: http://yy.com/x`, the request will be blocked:
```shell
curl http://127.0.0.1:9080/index.html -H 'Referer: http://yy.com/x'
```
```shell
HTTP/1.1 403 Forbidden
...
{"message":"Your referer host is not allowed"}
```
Since we have set `bypass_missing` to `true`, a request without the `Referer` header will be successful as the check is skipped:
```shell
curl http://127.0.0.1:9080/index.html
```
```shell
HTTP/1.1 200 OK
...
```
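You can alternatively block specific hosts with `blacklist` and customize the rejection message. The following is a minimal sketch with illustrative hostnames:

```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
    "uri": "/index.html",
    "upstream": {
        "type": "roundrobin",
        "nodes": {
            "127.0.0.1:1980": 1
        }
    },
    "plugins": {
        "referer-restriction": {
            "blacklist": [
                "yy.com",
                "*.yy.com"
            ],
            "message": "Referer host is blocked"
        }
    }
}'
```

With this configuration, requests with `Referer: http://yy.com/x` would be rejected with a `403` and the configured message, while other referers are allowed.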
## Delete Plugin
To remove the `referer-restriction` Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/index.html",
"plugins": {},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
}
}'
```


@@ -0,0 +1,296 @@
---
title: request-id
keywords:
- Apache APISIX
- API Gateway
- Request ID
description: The request-id Plugin adds a unique ID to each request proxied through APISIX, which can be used to track API requests.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
<head>
<link rel="canonical" href="https://docs.api7.ai/hub/request-id" />
</head>
## Description
The `request-id` Plugin adds a unique ID to each request proxied through APISIX, which can be used to track API requests. If a request carries an ID in the header corresponding to `header_name`, the Plugin will use the header value as the unique ID and will not overwrite with the automatically generated ID.
## Attributes
| Name | Type | Required | Default | Valid values | Description |
| ------------------- | ------- | -------- | -------------- | ------------------------------- | ---------------------------------------------------------------------- |
| header_name | string | False | "X-Request-Id" | | Name of the header that carries the request unique ID. Note that if a request carries an ID in the `header_name` header, the Plugin will use the header value as the unique ID and will not overwrite it with the generated ID. |
| include_in_response | boolean | False | true | | If true, include the generated request ID in the response header, where the name of the header is the `header_name` value. |
| algorithm | string | False | "uuid" | ["uuid","nanoid","range_id"] | Algorithm used for generating the unique ID. When set to `uuid` , the Plugin generates a universally unique identifier. When set to `nanoid`, the Plugin generates a compact, URL-safe ID. When set to `range_id`, the Plugin generates a sequential ID with specific parameters. |
| range_id | object | False | | | Configuration for generating a request ID using the `range_id` algorithm. |
| range_id.char_set | string | False | "abcdefghijklmnopqrstuvwxyzABCDEFGHIGKLMNOPQRSTUVWXYZ0123456789" | minimum length 6 | Character set used for the `range_id` algorithm. |
| range_id.length | integer | False | 16 | >=6 | Length of the generated ID for the `range_id` algorithm. |
## Examples
The examples below demonstrate how you can configure `request-id` in different scenarios.
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
### Attach Request ID to Default Response Header
The following example demonstrates how to configure `request-id` on a Route which attaches a generated request ID to the default `X-Request-Id` response header, if the header value is not passed in the request. When the `X-Request-Id` header is set in the request, the Plugin will take the value in the request header as the request ID.
Create a Route with the `request-id` Plugin using its default configurations (explicitly defined):
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${ADMIN_API_KEY}" \
-d '{
"id": "request-id-route",
"uri": "/anything",
"plugins": {
"request-id": {
"header_name": "X-Request-Id",
"include_in_response": true,
"algorithm": "uuid"
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Send a request to the Route:
```shell
curl -i "http://127.0.0.1:9080/anything"
```
You should receive an `HTTP/1.1 200 OK` response and see the response includes the `X-Request-Id` header with a generated ID:
```text
X-Request-Id: b9b2c0d4-d058-46fa-bafc-dd91a0ccf441
```
Send a request to the Route with a custom request ID in the header:
```shell
curl -i "http://127.0.0.1:9080/anything" -H 'X-Request-Id: some-custom-request-id'
```
You should receive an `HTTP/1.1 200 OK` response and see the response includes the `X-Request-Id` header with the custom request ID:
```text
X-Request-Id: some-custom-request-id
```
### Attach Request ID to Custom Response Header
The following example demonstrates how to configure `request-id` on a Route which attaches a generated request ID to a specified header.
Create a Route with the `request-id` Plugin to define a custom header that carries the request ID and include the request ID in the response header:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${ADMIN_API_KEY}" \
-d '{
"id": "request-id-route",
"uri": "/anything",
"plugins": {
"request-id": {
"header_name": "X-Req-Identifier",
"include_in_response": true
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Send a request to the route:
```shell
curl -i "http://127.0.0.1:9080/anything"
```
You should receive an `HTTP/1.1 200 OK` response and see the response includes the `X-Req-Identifier` header with a generated ID:
```text
X-Req-Identifier: 1c42ff59-ee4c-4103-a980-8359f4135b21
```
### Hide Request ID in Response Header
The following example demonstrates how to configure `request-id` on a Route which attaches a generated request ID to a specified header. The header containing the request ID should be forwarded to the Upstream service but not returned in the response header.
Create a Route with the `request-id` Plugin to define a custom header that carries the request ID and not include the request ID in the response header:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${ADMIN_API_KEY}" \
-d '{
"id": "request-id-route",
"uri": "/anything",
"plugins": {
"request-id": {
"header_name": "X-Req-Identifier",
"include_in_response": false
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Send a request to the Route:
```shell
curl -i "http://127.0.0.1:9080/anything"
```
You should receive an `HTTP/1.1 200 OK` response and should not see the `X-Req-Identifier` header among the response headers. In the response body, you should see:
```json
{
"args": {},
"data": "",
"files": {},
"form": {},
"headers": {
"Accept": "*/*",
"Host": "127.0.0.1",
"User-Agent": "curl/8.6.0",
"X-Amzn-Trace-Id": "Root=1-6752748c-7d364f48564508db1e8c9ea8",
"X-Forwarded-Host": "127.0.0.1",
"X-Req-Identifier": "268092bc-15e1-4461-b277-bf7775f2856f"
},
...
}
```
This shows the request ID is forwarded to the Upstream service but not returned in the response header.
### Use `nanoid` Algorithm
The following example demonstrates how to configure `request-id` on a Route and use the `nanoid` algorithm to generate the request ID.
Create a Route with the `request-id` Plugin as such:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${ADMIN_API_KEY}" \
-d '{
"id": "request-id-route",
"uri": "/anything",
"plugins": {
"request-id": {
"algorithm": "nanoid"
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Send a request to the Route:
```shell
curl -i "http://127.0.0.1:9080/anything"
```
You should receive an `HTTP/1.1 200 OK` response and see the response includes the `X-Request-Id` header with an ID generated using the `nanoid` algorithm:
```text
X-Request-Id: kepgHWCH2ycQ6JknQKrX2
```
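### Use `range_id` Algorithm

The `range_id` algorithm generates IDs from a configurable character set and length. The following is a minimal sketch of a Route using it; the character set and length are illustrative (see the attributes table for defaults):

```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
  -H "X-API-KEY: ${ADMIN_API_KEY}" \
  -d '{
    "id": "request-id-route",
    "uri": "/anything",
    "plugins": {
      "request-id": {
        "algorithm": "range_id",
        "range_id": {
          "char_set": "0123456789abcdef",
          "length": 16
        }
      }
    },
    "upstream": {
      "type": "roundrobin",
      "nodes": {
        "httpbin.org:80": 1
      }
    }
  }'
```

Sending a request to the Route should return an `X-Request-Id` response header containing a 16-character ID drawn from the configured character set.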
### Attach Request ID Globally and on a Route
The following example demonstrates how to configure `request-id` as a global Plugin and on a Route to attach two IDs.
Create a global rule for the `request-id` Plugin which adds request ID to a custom header:
```shell
curl -i "http://127.0.0.1:9180/apisix/admin/global_rules" -X PUT -d '{
"id": "rule-for-request-id",
"plugins": {
"request-id": {
"header_name": "Global-Request-ID"
}
}
}'
```
Create a Route with the `request-id` Plugin which adds request ID to a different custom header:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${ADMIN_API_KEY}" \
-d '{
"id": "request-id-route",
"uri": "/anything",
"plugins": {
"request-id": {
"header_name": "Route-Request-ID"
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Send a request to the Route:
```shell
curl -i "http://127.0.0.1:9080/anything"
```
You should receive an `HTTP/1.1 200 OK` response and see the response includes the following headers:
```text
Global-Request-ID: 2e9b99c1-08ed-4a74-b347-49c0891b07ad
Route-Request-ID: d755666b-732c-4f0e-a30e-a7a71ace4e26
```


@@ -0,0 +1,528 @@
---
title: request-validation
keywords:
- Apache APISIX
- API Gateway
- Request Validation
description: The request-validation Plugin validates requests before forwarding them to Upstream services. This Plugin uses JSON Schema for validation and can validate headers and body of a request.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
<head>
<link rel="canonical" href="https://docs.api7.ai/hub/request-validation" />
</head>
## Description
The `request-validation` Plugin validates requests before forwarding them to Upstream services. This Plugin uses [JSON Schema](https://github.com/api7/jsonschema) for validation and can validate headers and body of a request.
See [JSON schema specification](https://json-schema.org/specification) to learn more about the syntax.
## Attributes
| Name | Type | Required | Default | Valid values | Description |
|---------------|---------|----------|---------|---------------|---------------------------------------------------|
| header_schema | object | False | | | Schema for the request header data. |
| body_schema | object | False | | | Schema for the request body data. |
| rejected_code | integer | False | 400 | [200,...,599] | Status code to return when rejecting requests. |
| rejected_msg | string | False | | | Message to return when rejecting requests. |
:::note
At least one of `header_schema` or `body_schema` should be filled in.
:::
## Examples
The examples below demonstrate how you can configure `request-validation` for different scenarios.
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
### Validate Request Header
The following example demonstrates how to validate request headers against a defined JSON schema, which requires two specific headers and the header value to conform to specified requirements.
Create a Route with `request-validation` Plugin as follows:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "request-validation-route",
"uri": "/get",
"plugins": {
"request-validation": {
"header_schema": {
"type": "object",
"required": ["User-Agent", "Host"],
"properties": {
"User-Agent": {
"type": "string",
"pattern": "^curl\/"
},
"Host": {
"type": "string",
"enum": ["httpbin.org", "httpbin"]
}
}
}
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
#### Verify with Request Conforming to the Schema
Send a request with header `Host: httpbin`, which complies with the schema:
```shell
curl -i "http://127.0.0.1:9080/get" -H "Host: httpbin"
```
You should receive an `HTTP/1.1 200 OK` response similar to the following:
```json
{
"args": {},
"headers": {
"Accept": "*/*",
"Host": "httpbin",
"User-Agent": "curl/7.74.0",
"X-Amzn-Trace-Id": "Root=1-6509ae35-63d1e0fd3934e3f221a95dd8",
"X-Forwarded-Host": "httpbin"
},
"origin": "127.0.0.1, 183.17.233.107",
"url": "http://httpbin/get"
}
```
#### Verify with Request Not Conforming to the Schema
Send a request without any header:
```shell
curl -i "http://127.0.0.1:9080/get"
```
You should receive an `HTTP/1.1 400 Bad Request` response, showing that the request fails to pass validation:
```text
property "Host" validation failed: matches none of the enum value
```
Send a request with the required headers but with non-conformant header value:
```shell
curl -i "http://127.0.0.1:9080/get" -H "Host: httpbin" -H "User-Agent: cli-mock"
```
You should receive an `HTTP/1.1 400 Bad Request` response showing the `User-Agent` header value does not match the expected pattern:
```text
property "User-Agent" validation failed: failed to match pattern "^curl/" with "cli-mock"
```
### Customize Rejection Message and Status Code
The following example demonstrates how to customize response status and message when the validation fails.
Configure the Route with `request-validation` as follows:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "request-validation-route",
"uri": "/get",
"plugins": {
"request-validation": {
"header_schema": {
"type": "object",
"required": ["Host"],
"properties": {
"Host": {
"type": "string",
"enum": ["httpbin.org", "httpbin"]
}
}
},
"rejected_code": 403,
"rejected_msg": "Request header validation failed."
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Send a request with a misconfigured `Host` in the header:
```shell
curl -i "http://127.0.0.1:9080/get" -H "Host: httpbin2"
```
You should receive an `HTTP/1.1 403 Forbidden` response with the custom message:
```text
Request header validation failed.
```
### Validate Request Body
The following example demonstrates how to validate request body against a defined JSON schema.
The `request-validation` Plugin supports validation of two types of media types:
* `application/json`
* `application/x-www-form-urlencoded`
#### Validate JSON Request Body
Create a Route with `request-validation` Plugin as follows:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "request-validation-route",
"uri": "/post",
"plugins": {
"request-validation": {
"header_schema": {
"type": "object",
"required": ["Content-Type"],
"properties": {
"Content-Type": {
"type": "string",
"pattern": "^application\/json$"
}
}
},
"body_schema": {
"type": "object",
"required": ["required_payload"],
"properties": {
"required_payload": {"type": "string"},
"boolean_payload": {"type": "boolean"},
"array_payload": {
"type": "array",
"minItems": 1,
"items": {
"type": "integer",
"minimum": 200,
"maximum": 599
},
"uniqueItems": true,
"default": [200]
}
}
}
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Send a request with JSON body that conforms to the schema to verify:
```shell
curl -i "http://127.0.0.1:9080/post" -X POST \
-H "Content-Type: application/json" \
-d '{"required_payload":"hello", "array_payload":[301]}'
```
You should receive an `HTTP/1.1 200 OK` response similar to the following:
```json
{
"args": {},
"data": "{\"array_payload\":[301],\"required_payload\":\"hello\"}",
"files": {},
"form": {},
"headers": {
...
},
"json": {
"array_payload": [
301
],
"required_payload": "hello"
},
"origin": "127.0.0.1, 183.17.233.107",
"url": "http://127.0.0.1/post"
}
```
If you send a request without specifying `Content-Type: application/json`:
```shell
curl -i "http://127.0.0.1:9080/post" -X POST \
-d '{"required_payload":"hello,world"}'
```
You should receive an `HTTP/1.1 400 Bad Request` response similar to the following:
```text
property "Content-Type" validation failed: failed to match pattern "^application/json$" with "application/x-www-form-urlencoded"
```
Similarly, if you send a request without the required JSON field `required_payload`:
```shell
curl -i "http://127.0.0.1:9080/post" -X POST \
-H "Content-Type: application/json" \
-d '{}'
```
You should receive an `HTTP/1.1 400 Bad Request` response:
```text
property "required_payload" is required
```
#### Validate URL-Encoded Form Body
Create a Route with `request-validation` Plugin as follows:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "request-validation-route",
"uri": "/post",
"plugins": {
"request-validation": {
"header_schema": {
"type": "object",
"required": ["Content-Type"],
"properties": {
"Content-Type": {
"type": "string",
"pattern": "^application\/x-www-form-urlencoded$"
}
}
},
"body_schema": {
"type": "object",
"required": ["required_payload","enum_payload"],
"properties": {
"required_payload": {"type": "string"},
"enum_payload": {
"type": "string",
"enum": ["enum_string_1", "enum_string_2"],
"default": "enum_string_1"
}
}
}
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Send a request with URL-encoded form data to verify:
```shell
curl -i "http://127.0.0.1:9080/post" -X POST \
-H "Content-Type: application/x-www-form-urlencoded" \
-d "required_payload=hello&enum_payload=enum_string_1"
```
You should receive an `HTTP/1.1 200 OK` response similar to the following:
```json
{
"args": {},
"data": "",
"files": {},
"form": {
"enum_payload": "enum_string_1",
"required_payload": "hello"
},
"headers": {
...
},
"json": null,
"origin": "127.0.0.1, 183.17.233.107",
"url": "http://127.0.0.1/post"
}
```
Send a request without the URL-encoded field `enum_payload`:
```shell
curl -i "http://127.0.0.1:9080/post" -X POST \
-H "Content-Type: application/x-www-form-urlencoded" \
-d "required_payload=hello"
```
You should receive an `HTTP/1.1 400 Bad Request` response similar to the following:
```text
property "enum_payload" is required
```
## Appendix: JSON Schema
The following section provides boilerplate JSON schema for you to adjust, combine, and use with this Plugin. For a complete reference, see [JSON schema specification](https://json-schema.org/specification).
### Enumerated Values
```json
{
"body_schema": {
"type": "object",
"required": ["enum_payload"],
"properties": {
"enum_payload": {
"type": "string",
"enum": ["enum_string_1", "enum_string_2"],
"default": "enum_string_1"
}
}
}
}
```
### Boolean Values
```json
{
"body_schema": {
"type": "object",
"required": ["bool_payload"],
"properties": {
"bool_payload": {
"type": "boolean",
"default": true
}
}
}
}
```
### Numeric Values
```json
{
"body_schema": {
"type": "object",
"required": ["integer_payload"],
"properties": {
"integer_payload": {
"type": "integer",
"minimum": 1,
"maximum": 65535
}
}
}
}
```
### Strings
```json
{
"body_schema": {
"type": "object",
"required": ["string_payload"],
"properties": {
"string_payload": {
"type": "string",
"minLength": 1,
"maxLength": 32
}
}
}
}
```
### RegEx for Strings
```json
{
"body_schema": {
"type": "object",
"required": ["regex_payload"],
"properties": {
"regex_payload": {
"type": "string",
"minLength": 1,
"maxLength": 32,
"pattern": "[[^[a-zA-Z0-9_]+$]]"
}
}
}
}
```
### Arrays
```json
{
"body_schema": {
"type": "object",
"required": ["array_payload"],
"properties": {
"array_payload": {
"type": "array",
"minItems": 1,
"items": {
"type": "integer",
"minimum": 200,
"maximum": 599
},
"uniqueItems": true,
"default": [200, 302]
}
}
}
}
```
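### Nested Objects

Object schemas can be nested to validate structured payloads; the following is a minimal sketch:

```json
{
    "body_schema": {
        "type": "object",
        "required": ["user"],
        "properties": {
            "user": {
                "type": "object",
                "required": ["name"],
                "properties": {
                    "name": {
                        "type": "string",
                        "minLength": 1
                    },
                    "age": {
                        "type": "integer",
                        "minimum": 0
                    }
                }
            }
        }
    }
}
```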


@@ -0,0 +1,313 @@
---
title: response-rewrite
keywords:
- Apache APISIX
- API Gateway
- Plugin
- Response Rewrite
- response-rewrite
description: The response-rewrite Plugin offers options to rewrite responses that APISIX and its Upstream services return to clients. With the Plugin, you can modify HTTP status codes, request headers, response body, and more.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
<head>
<link rel="canonical" href="https://docs.api7.ai/hub/response-rewrite" />
</head>
## Description
The `response-rewrite` Plugin offers options to rewrite responses that APISIX and its Upstream services return to clients. With the Plugin, you can modify HTTP status codes, request headers, response body, and more.
For instance, you can use this Plugin to:
- Support [CORS](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS) by setting `Access-Control-Allow-*` headers.
- Indicate redirection by setting HTTP status codes and `Location` header.
:::tip
You can also use the [redirect](./redirect.md) Plugin to set up redirects.
:::
## Attributes
| Name | Type | Required | Default | Valid values | Description |
|-----------------|---------|----------|---------|---------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| status_code | integer | False | | [200, 598] | New HTTP status code in the response. If unset, falls back to the original status code. |
| body | string | False | | | New response body. The `Content-Length` header would also be reset. Should not be configured with `filters`. |
| body_base64 | boolean | False | false | | If true, decode the response body configured in `body` before sending it to the client, which is useful for image and protobuf decoding. Note that this configuration cannot be used to decode the Upstream response. |
| headers | object | False | | | Actions to be executed in the order of `add`, `remove`, and `set`. |
| headers.add | array[string] | False | | | Headers to append to the response. If the header is already present in the response, the header value will be appended. The header value can be set to a constant, or one or more [Nginx variables](https://nginx.org/en/docs/http/ngx_http_core_module.html). |
| headers.set | object | False | | | Headers to set in the response. If the header is already present in the response, the header value will be overwritten. The header value can be set to a constant, or one or more [Nginx variables](https://nginx.org/en/docs/http/ngx_http_core_module.html). |
| headers.remove | array[string] | False | | | Headers to remove from the response. |
| vars | array[array] | False | | | An array of one or more matching conditions in the form of [lua-resty-expr](https://github.com/api7/lua-resty-expr#operator-list). |
| filters | array[object] | False | | | List of filters that modify the response body by replacing one specified string with another. Should not be configured with `body`. |
| filters.regex | string | True | | | RegEx pattern to match on the response body. |
| filters.scope | string | False | "once" | ["once","global"] | Scope of substitution. `once` substitutes the first matched instance and `global` substitutes globally. |
| filters.replace | string | True | | | Content to substitute with. |
| filters.options | string | False | "jo" | | RegEx options to control how the match operation should be performed. See [Lua NGINX module](https://github.com/openresty/lua-nginx-module#ngxrematch) for the available options. |
## Examples
The examples below demonstrate how you can configure `response-rewrite` on a Route in different scenarios.
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
### Rewrite Header and Body
The following example demonstrates how to add response body and headers, only to responses with `200` HTTP status codes.
Create a Route with the `response-rewrite` Plugin:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "response-rewrite-route",
"methods": ["GET"],
"uri": "/headers",
"plugins": {
"response-rewrite": {
"body": "{\"code\":\"ok\",\"message\":\"new json body\"}",
"headers": {
"set": {
"X-Server-id": 3,
"X-Server-status": "on",
"X-Server-balancer-addr": "$balancer_ip:$balancer_port"
}
},
"vars": [
[ "status","==",200 ]
]
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Send a request to verify:
```shell
curl -i "http://127.0.0.1:9080/headers"
```
You should receive an `HTTP/1.1 200 OK` response similar to the following:
```text
...
X-Server-id: 3
X-Server-status: on
X-Server-balancer-addr: 50.237.103.220:80
{"code":"ok","message":"new json body"}
```
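In addition to `set`, the `headers` object supports `add` and `remove` actions. The following is a minimal sketch of the plugin portion of a Route configuration; the header names and values are illustrative:

```json
"response-rewrite": {
    "headers": {
        "add": ["X-Cache-Status: MISS"],
        "remove": ["Server"]
    }
}
```

Each `add` entry is expressed as a `Name: value` string appended to the response, while `remove` takes a list of header names to strip from the response.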
### Rewrite Header With RegEx Filter
The following example demonstrates how to use RegEx filter matching to replace `X-Amzn-Trace-Id` for responses.
Create a Route with the `response-rewrite` Plugin:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "response-rewrite-route",
"methods": ["GET"],
"uri": "/headers",
"plugins":{
"response-rewrite":{
"filters":[
{
"regex":"X-Amzn-Trace-Id",
"scope":"global",
"replace":"X-Amzn-Trace-Id-Replace"
}
]
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Send a request to verify:
```shell
curl -i "http://127.0.0.1:9080/headers"
```
You should see a response similar to the following:
```text
{
"headers": {
"Accept": "*/*",
"Host": "127.0.0.1",
"User-Agent": "curl/8.2.1",
"X-Amzn-Trace-Id-Replace": "Root=1-6500095d-1041b05e2ba9c6b37232dbc7",
"X-Forwarded-Host": "127.0.0.1"
}
}
```
### Decode Body from Base64
The following example demonstrates how to decode the response body from Base64 format.
Create a Route with the `response-rewrite` Plugin:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "response-rewrite-route",
"methods": ["GET"],
"uri": "/get",
"plugins":{
"response-rewrite": {
"body": "SGVsbG8gV29ybGQ=",
"body_base64": true
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Send a request to verify:
```shell
curl "http://127.0.0.1:9080/get"
```
You should see a response of the following:
```text
Hello World
```
### Rewrite Response and Its Connection with Execution Phases
The following example demonstrates the connection between the `response-rewrite` Plugin and [execution phases](/apisix/key-concepts/plugins#plugins-execution-lifecycle) by configuring it alongside the `key-auth` Plugin and showing how the response is still rewritten to `200 OK` for an unauthenticated request.
Create a Consumer `jack`:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"username": "jack"
}'
```
Create `key-auth` credential for the Consumer:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers/jack/credentials" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "cred-jack-key-auth",
"plugins": {
"key-auth": {
"key": "jack-key"
}
}
}'
```
Create a Route with `key-auth` and configure `response-rewrite` to rewrite the response status code and body:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "response-rewrite-route",
"uri": "/get",
"plugins": {
"key-auth": {},
"response-rewrite": {
"status_code": 200,
"body": "{\"code\": 200, \"msg\": \"success\"}"
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Send a request to the Route with the valid key:
```shell
curl -i "http://127.0.0.1:9080/get" -H 'apikey: jack-key'
```
You should receive an `HTTP/1.1 200 OK` response of the following:
```text
{"code": 200, "msg": "success"}
```
Send a request to the Route without any key:
```shell
curl -i "http://127.0.0.1:9080/get"
```
You should still receive the same `HTTP/1.1 200 OK` response, instead of an `HTTP/1.1 401 Unauthorized` response from the `key-auth` Plugin. This shows that the `response-rewrite` Plugin still rewrites the response.
This is because the **header_filter** and **body_filter** phase logic of the `response-rewrite` Plugin will continue to run after [`ngx.exit`](https://openresty-reference.readthedocs.io/en/latest/Lua_Nginx_API/#ngxexit) in the **access** or **rewrite** phases from other plugins.
The following table summarizes the impact of `ngx.exit` on execution phases.
| Phase | rewrite | access | header_filter | body_filter |
|---------------|----------|----------|---------------|-------------|
| **rewrite** | ngx.exit | | | |
| **access** | × | ngx.exit | | |
| **header_filter** | ✓ | ✓ | ngx.exit | |
| **body_filter** | ✓ | ✓ | × | ngx.exit |
For example, if `ngx.exit` takes place in the **rewrite** phase, it will interrupt the execution of the **access** phase but not interfere with the **header_filter** and **body_filter** phases.


@@ -0,0 +1,280 @@
---
title: rocketmq-logger
keywords:
- Apache APISIX
- API Gateway
- Plugin
- RocketMQ Logger
description: This document contains information about the Apache APISIX rocketmq-logger Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `rocketmq-logger` Plugin provides the ability to push logs as JSON objects to your RocketMQ clusters.
It might take some time to receive the log data. It will be automatically sent after the timer function in the [batch processor](../batch-processor.md) expires.
## Attributes
| Name | Type | Required | Default | Valid values | Description |
|------------------------|---------|----------|-------------------------------------------------------------------------------|----------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| nameserver_list | object | True | | | List of RocketMQ nameservers. |
| topic | string | True | | | Target topic to push the data to. |
| key | string | False | | | Key of the messages. |
| tag | string | False | | | Tag of the messages. |
| log_format | object | False | | | Log format declared as key value pairs in JSON format. Values only support strings. [APISIX](../apisix-variable.md) or [Nginx](http://nginx.org/en/docs/varindex.html) variables can be used by prefixing the string with `$`. |
| timeout | integer | False | 3 | [1,...] | Timeout for the upstream to send data. |
| use_tls | boolean | False | false | | When set to `true`, uses TLS. |
| access_key | string | False | "" | | Access key for ACL. Setting to an empty string will disable the ACL. |
| secret_key             | string  | False    | ""                                                                              |                      | Secret key for ACL. |
| name                   | string  | False    | "rocketmq logger"                                                               |                      | Unique identifier for the batch processor. If you use Prometheus to monitor APISIX metrics, the name is exported in `apisix_batch_process_entries`. |
| meta_format            | enum    | False    | "default"                                                                       | ["default", "origin"] | Format to collect the request information. Setting to `default` collects the information in JSON format and `origin` collects the information with the original HTTP request. See [examples](#meta_format-example) below. |
| include_req_body | boolean | False | false | [false, true] | When set to `true` includes the request body in the log. If the request body is too big to be kept in the memory, it can't be logged due to Nginx's limitations. |
| include_req_body_expr | array | False | | | Filter for when the `include_req_body` attribute is set to `true`. Request body is only logged when the expression set here evaluates to `true`. See [lua-resty-expr](https://github.com/api7/lua-resty-expr) for more. |
| include_resp_body | boolean | False | false | [false, true] | When set to `true` includes the response body in the log. |
| include_resp_body_expr | array | False | | | Filter for when the `include_resp_body` attribute is set to `true`. Response body is only logged when the expression set here evaluates to `true`. See [lua-resty-expr](https://github.com/api7/lua-resty-expr) for more. |
NOTE: `encrypt_fields = {"secret_key"}` is also defined in the schema, which means that the field will be stored encrypted in etcd. See [encrypted storage fields](../plugin-develop.md#encrypted-storage-fields).
This Plugin supports using batch processors to aggregate and process entries (logs/data) in a batch. This avoids the need for frequently submitting the data. The batch processor submits data every `5` seconds or when the data in the queue reaches `1000`. See [Batch Processor](../batch-processor.md#configuration) for more information or setting your custom configuration.
:::info IMPORTANT
The data is first written to a buffer. When the buffer exceeds the `batch_max_size` or `buffer_duration` attribute, the data is sent to the RocketMQ server and the buffer is flushed.
If the process is successful, it will return `true`. If it fails, it returns `nil` along with the error string "buffer overflow".
:::
### meta_format example
- `default`:
```json
{
"upstream": "127.0.0.1:1980",
"start_time": 1619414294760,
"client_ip": "127.0.0.1",
"service_id": "",
"route_id": "1",
"request": {
"querystring": {
"ab": "cd"
},
"size": 90,
"uri": "/hello?ab=cd",
"url": "http://localhost:1984/hello?ab=cd",
"headers": {
"host": "localhost",
"content-length": "6",
"connection": "close"
},
"body": "abcdef",
"method": "GET"
},
"response": {
"headers": {
"connection": "close",
"content-type": "text/plain; charset=utf-8",
"date": "Mon, 26 Apr 2021 05:18:14 GMT",
"server": "APISIX/2.5",
"transfer-encoding": "chunked"
},
"size": 190,
"status": 200
},
"server": {
"hostname": "localhost",
"version": "2.5"
},
"latency": 0
}
```
- `origin`:
```http
GET /hello?ab=cd HTTP/1.1
host: localhost
content-length: 6
connection: close
abcdef
```
## Metadata
You can also set the format of the logs by configuring the Plugin metadata. The following configurations are available:
| Name | Type | Required | Default | Description |
|------------|--------|----------|-------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| log_format | object | False | | Log format declared as key value pairs in JSON format. Values only support strings. [APISIX](../apisix-variable.md) or [Nginx](http://nginx.org/en/docs/varindex.html) variables can be used by prefixing the string with `$`. |
:::info IMPORTANT
Configuring the Plugin metadata is global in scope. This means that it will take effect on all Routes and Services which use the `rocketmq-logger` Plugin.
:::
The example below shows how you can configure through the Admin API:
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl http://127.0.0.1:9180/apisix/admin/plugin_metadata/rocketmq-logger -H "X-API-KEY: $admin_key" -X PUT -d '
{
"log_format": {
"host": "$host",
"@timestamp": "$time_iso8601",
"client_ip": "$remote_addr"
}
}'
```
With this configuration, your logs would be formatted as shown below:
```shell
{"host":"localhost","@timestamp":"2020-09-23T19:05:05-04:00","client_ip":"127.0.0.1","route_id":"1"}
{"host":"localhost","@timestamp":"2020-09-23T19:05:05-04:00","client_ip":"127.0.0.1","route_id":"1"}
```
## Enable Plugin
The example below shows how you can enable the `rocketmq-logger` Plugin on a specific Route:
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/5 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"plugins": {
"rocketmq-logger": {
"nameserver_list" : [ "127.0.0.1:9876" ],
"topic" : "test2",
"batch_max_size": 1,
"name": "rocketmq logger"
}
},
"upstream": {
"nodes": {
"127.0.0.1:1980": 1
},
"type": "roundrobin"
},
"uri": "/hello"
}'
```
This Plugin also supports pushing to more than one nameserver at a time. You can specify multiple nameservers in the Plugin configuration as shown below:
```json
"nameserver_list" : [
"127.0.0.1:9876",
"127.0.0.2:9876"
]
```
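If your RocketMQ cluster requires TLS or has ACL enabled, you can turn on `use_tls` and provide the ACL credentials. The following is a minimal sketch of the plugin configuration; the credentials are placeholders:

```json
"rocketmq-logger": {
    "nameserver_list": ["127.0.0.1:9876"],
    "topic": "test2",
    "use_tls": true,
    "access_key": "your-access-key",
    "secret_key": "your-secret-key"
}
```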
## Example usage
Now, if you make a request to APISIX, it will be logged in your RocketMQ server:
```shell
curl -i http://127.0.0.1:9080/hello
```
## Delete Plugin
To remove the `rocketmq-logger` Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload, and you do not have to restart for this to take effect.
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"methods": ["GET"],
"uri": "/hello",
"plugins": {},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
}
}'
```


@@ -0,0 +1,118 @@
---
title: server-info
keywords:
- Apache APISIX
- API Gateway
- Plugin
- Server info
- server-info
description: This document contains information about the Apache APISIX server-info Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `server-info` Plugin periodically reports basic server information to etcd.
:::warning
The `server-info` Plugin is deprecated and will be removed in a future release. For more details about the deprecation and removal plan, please refer to [this discussion](https://github.com/apache/apisix/discussions/12298).
:::
The information reported by the Plugin is explained below:
| Name | Type | Description |
|--------------|---------|------------------------------------------------------------------------------------------------------------------------|
| boot_time | integer | Bootstrap time (UNIX timestamp) of the APISIX instance. Resets when hot updating but not when APISIX is just reloaded. |
| id | string | APISIX instance ID. |
| etcd_version | string | Version of the etcd cluster used by APISIX. Will be `unknown` if the network to etcd is partitioned. |
| version | string | Version of APISIX instance. |
| hostname | string | Hostname of the machine/pod APISIX is deployed to. |
## Attributes
None.
## API
This Plugin exposes the endpoint `/v1/server_info` to the [Control API](../control-api.md)
## Enable Plugin
Add `server-info` to the Plugin list in your configuration file (`conf/config.yaml`):
```yaml title="conf/config.yaml"
plugins:
- ...
- server-info
```
## Customizing server info report configuration
We can change the report configurations in the `plugin_attr` section of `conf/config.yaml`.
The following configurations of the server info report can be customized:
| Name | Type | Default | Description |
| ------------ | ------ | -------- | -------------------------------------------------------------------- |
| report_ttl | integer | 36 | Time in seconds after which the report is deleted from etcd (maximum: 86400, minimum: 3). |
To customize, you can modify the `plugin_attr` attribute in your configuration file (`conf/config.yaml`):
```yaml title="conf/config.yaml"
plugin_attr:
server-info:
report_ttl: 60
```
## Example usage
After you enable the Plugin as mentioned above, you can access the server info report through the Control API:
```shell
curl http://127.0.0.1:9090/v1/server_info -s | jq .
```
```json
{
"etcd_version": "3.5.0",
"id": "b7ce1c5c-b1aa-4df7-888a-cbe403f3e948",
"hostname": "fedora32",
"version": "2.1",
"boot_time": 1608522102
}
```
:::tip
You can also view the server info report through the [APISIX Dashboard](/docs/dashboard/USER_GUIDE).
:::
## Delete Plugin
To remove the Plugin, you can remove `server-info` from the list of Plugins in your configuration file:
```yaml title="conf/config.yaml"
plugins:
- ...
```


@@ -0,0 +1,144 @@
---
title: serverless
keywords:
- Apache APISIX
- API Gateway
- Plugin
- Serverless
description: This document contains information about the Apache APISIX serverless Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
There are two `serverless` Plugins in APISIX: `serverless-pre-function` and `serverless-post-function`. The former runs at the beginning of the specified phase, while the latter runs at the end of the specified phase.
Both Plugins have the same attributes.
## Attributes
| Name | Type | Required | Default | Valid values | Description |
|-----------|---------------|----------|------------|------------------------------------------------------------------------------|------------------------------------------------------------------|
| phase | string | False | ["access"] | ["rewrite", "access", "header_filter", "body_filter", "log", "before_proxy"] | Phase before or after which the serverless function is executed. |
| functions | array[string] | True | | | List of functions that are executed sequentially. |
:::info IMPORTANT
Only Lua functions are allowed here and not other Lua code.
For example, anonymous functions are legal:
```lua
return function()
ngx.log(ngx.ERR, 'one')
end
```
Closures are also legal:
```lua
local count = 1
return function()
count = count + 1
ngx.say(count)
end
```
But code other than functions are illegal:
```lua
local count = 1
ngx.say(count)
```
:::
:::note
From v2.6, `conf` and `ctx` are passed as the first two arguments to a serverless function like regular Plugins.
Prior to v2.12.0, the phase `before_proxy` was called `balancer`. This was updated considering that this method would run after `access` and before the request goes Upstream and is unrelated to `balancer`.
:::
## Enable Plugin
The example below enables the Plugin on a specific Route:
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/index.html",
"plugins": {
"serverless-pre-function": {
"phase": "rewrite",
"functions" : ["return function() ngx.log(ngx.ERR, \"serverless pre function\"); end"]
},
"serverless-post-function": {
"phase": "rewrite",
"functions" : ["return function(conf, ctx) ngx.log(ngx.ERR, \"match uri \", ctx.curr_req_matched and ctx.curr_req_matched._path); end"]
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
}
}'
```
## Example usage
Once you have configured the Plugin as shown above, you can make a request as shown below:
```shell
curl -i http://127.0.0.1:9080/index.html
```
You will find the messages "serverless pre function" and "match uri /index.html" in the error log (`error.log`).
## Delete Plugin
To remove the `serverless` Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"methods": ["GET"],
"uri": "/index.html",
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
}
}'
```


@@ -0,0 +1,344 @@
---
title: skywalking-logger
keywords:
- Apache APISIX
- API Gateway
- Plugin
- SkyWalking Logger
- skywalking-logger
description: The skywalking-logger pushes request and response logs as JSON objects to SkyWalking OAP server in batches and supports the customization of log formats.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
<head>
<link rel="canonical" href="https://docs.api7.ai/hub/skywalking-logger" />
</head>
## Description
The `skywalking-logger` Plugin pushes request and response logs as JSON objects to SkyWalking OAP server in batches and supports the customization of log formats.
If there is an existing tracing context, it sets up the trace-log correlation automatically and relies on [SkyWalking Cross Process Propagation Headers Protocol](https://skywalking.apache.org/docs/main/next/en/api/x-process-propagation-headers-v3/).
## Attributes
| Name | Type | Required | Default | Valid values | Description |
|-----------------------|---------|----------|------------------------|---------------|--------------------------------------------------------------------------------------------------------------|
| endpoint_addr | string | True | | | URI of the SkyWalking OAP server. |
| service_name | string | False | "APISIX" | | Service name for the SkyWalking reporter. |
| service_instance_name | string | False | "APISIX Instance Name" | | Service instance name for the SkyWalking reporter. Set it to `$hostname` to directly get the local hostname. |
| log_format            | object  | False    |                        |               | Custom log format in key-value pairs in JSON format. Support [APISIX](../apisix-variable.md) or [Nginx variables](http://nginx.org/en/docs/varindex.html) in values if the string starts with `$`. |
| timeout | integer | False | 3 | [1,...] | Time to keep the connection alive for after sending a request. |
| name | string | False | "skywalking logger" | | Unique identifier to identify the logger. If you use Prometheus to monitor APISIX metrics, the name is exported in `apisix_batch_process_entries`. |
| include_req_body      | boolean | False    | false                  |               | If true, include the request body in the log. Note that if the request body is too big to be kept in the memory, it can not be logged due to NGINX's limitations. |
| include_req_body_expr | array[array] | False |                       |               | An array of one or more conditions in the form of [lua-resty-expr](https://github.com/api7/lua-resty-expr). Used when the `include_req_body` is true. Request body would only be logged when the expressions configured here evaluate to true. |
| include_resp_body     | boolean | False    | false                  |               | If true, include the response body in the log. |
| include_resp_body_expr | array[array] | False |                      |               | An array of one or more conditions in the form of [lua-resty-expr](https://github.com/api7/lua-resty-expr). Used when the `include_resp_body` is true. Response body would only be logged when the expressions configured here evaluate to true. |
This Plugin supports using batch processors to aggregate and process entries (logs/data) in a batch. This avoids the need for frequently submitting the data. The batch processor submits data every `5` seconds or when the data in the queue reaches `1000`. See [Batch Processor](../batch-processor.md#configuration) for more information or setting your custom configuration.
## Metadata
You can also set the format of the logs by configuring the Plugin metadata. The following configurations are available:
| Name | Type | Required | Default | Description |
| ---------- | ------ | -------- | ----------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| log_format | object | False | | Custom log format in key-value pairs in JSON format. Support [APISIX](../apisix-variable.md) or [NGINX variables](http://nginx.org/en/docs/varindex.html) in values. |
## Examples
The examples below demonstrate how you can configure `skywalking-logger` Plugin for different scenarios.
To follow along with the example, start the storage, OAP, and Booster UI with Docker Compose, following [SkyWalking's documentation](https://skywalking.apache.org/docs/main/next/en/setup/backend/backend-docker/). Once set up, the OAP server should be listening on `12800` and you should be able to access the UI at [http://localhost:8080](http://localhost:8080).
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
### Log Requests in Default Log Format
The following example demonstrates how you can configure the `skywalking-logger` Plugin on a Route to log information of requests hitting the Route.
Create a Route with the `skywalking-logger` Plugin and configure the Plugin with your OAP server URI:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "skywalking-logger-route",
"uri": "/anything",
"plugins": {
"skywalking-logger": {
"endpoint_addr": "http://192.168.2.103:12800"
}
},
"upstream": {
"nodes": {
"httpbin.org:80": 1
},
"type": "roundrobin"
}
}'
```
Send a request to the Route:
```shell
curl -i "http://127.0.0.1:9080/anything"
```
You should receive an `HTTP/1.1 200 OK` response.
In [Skywalking UI](http://localhost:8080), navigate to __General Service__ > __Services__. You should see a service called `APISIX` with a log entry corresponding to your request:
```json
{
"upstream_latency": 674,
"request": {
"method": "GET",
"headers": {
"user-agent": "curl/8.6.0",
"host": "127.0.0.1:9080",
"accept": "*/*"
},
"url": "http://127.0.0.1:9080/anything",
"size": 85,
"querystring": {},
"uri": "/anything"
},
"client_ip": "192.168.65.1",
"route_id": "skywalking-logger-route",
"start_time": 1736945107345,
"upstream": "3.210.94.60:80",
"server": {
"version": "3.11.0",
"hostname": "7edbcebe8eb3"
},
"service_id": "",
"response": {
"size": 619,
"status": 200,
"headers": {
"content-type": "application/json",
"date": "Thu, 16 Jan 2025 12:45:08 GMT",
"server": "APISIX/3.11.0",
"access-control-allow-origin": "*",
"connection": "close",
"access-control-allow-credentials": "true",
"content-length": "391"
}
},
"latency": 764.9998664856,
"apisix_latency": 90.999866485596
}
```
### Log Request and Response Headers With Plugin Metadata
The following example demonstrates how you can customize log format using Plugin metadata and built-in variables to log specific headers from request and response.
In APISIX, Plugin metadata is used to configure the common metadata fields of all Plugin instances of the same Plugin. It is useful when a Plugin is enabled across multiple resources and requires a universal update to their metadata fields.
First, create a Route with the `skywalking-logger` Plugin and configure the Plugin with your OAP server URI:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "skywalking-logger-route",
"uri": "/anything",
"plugins": {
"skywalking-logger": {
"endpoint_addr": "http://192.168.2.103:12800"
}
},
"upstream": {
"nodes": {
"httpbin.org:80": 1
},
"type": "roundrobin"
}
}'
```
Next, configure the Plugin metadata for `skywalking-logger` to log the custom request header `env` and the response header `Content-Type`:
```shell
curl "http://127.0.0.1:9180/apisix/admin/plugin_metadata/skywalking-logger" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"log_format": {
"host": "$host",
"@timestamp": "$time_iso8601",
"client_ip": "$remote_addr",
"env": "$http_env",
"resp_content_type": "$sent_http_Content_Type"
}
}'
```
Send a request to the Route with the `env` header:
```shell
curl -i "http://127.0.0.1:9080/anything" -H "env: dev"
```
You should receive an `HTTP/1.1 200 OK` response. In [Skywalking UI](http://localhost:8080), navigate to __General Service__ > __Services__. You should see a service called `APISIX` with a log entry corresponding to your request:
```json
[
{
"route_id": "skywalking-logger-route",
"client_ip": "192.168.65.1",
"@timestamp": "2025-01-16T12:51:53+00:00",
"host": "127.0.0.1",
"env": "dev",
"resp_content_type": "application/json"
}
]
```
### Log Request Bodies Conditionally
The following example demonstrates how you can conditionally log request body.
Create a Route with the `skywalking-logger` Plugin configured to include the request body only when the URL query string `log_body` is `yes`:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "skywalking-logger-route",
"uri": "/anything",
"plugins": {
"skywalking-logger": {
"endpoint_addr": "http://192.168.2.103:12800",
"include_req_body": true,
"include_req_body_expr": [["arg_log_body", "==", "yes"]]
}
},
"upstream": {
"nodes": {
"httpbin.org:80": 1
},
"type": "roundrobin"
}
}'
```
Send a request to the Route with a URL query string satisfying the condition:
```shell
curl -i "http://127.0.0.1:9080/anything?log_body=yes" -X POST -d '{"env": "dev"}'
```
You should receive an `HTTP/1.1 200 OK` response. In [Skywalking UI](http://localhost:8080), navigate to __General Service__ > __Services__. You should see a service called `APISIX` with a log entry corresponding to your request, with the request body logged:
```json
[
{
"request": {
"url": "http://127.0.0.1:9080/anything?log_body=yes",
"querystring": {
"log_body": "yes"
},
"uri": "/anything?log_body=yes",
...,
"body": "{\"env\": \"dev\"}",
},
...
}
]
```
Send a request to the Route without any URL query string:
```shell
curl -i "http://127.0.0.1:9080/anything" -X POST -d '{"env": "dev"}'
```
You should see a log entry for this request, but it should not include the request body.
:::info
If you have customized the `log_format` in addition to setting `include_req_body` or `include_resp_body` to `true`, the Plugin would not include the bodies in the logs.
As a workaround, you may be able to use the NGINX variable `$request_body` in the log format, such as:
```json
{
"skywalking-logger": {
...,
"log_format": {"body": "$request_body"}
}
}
```
:::
### Associate Traces with Logs
The following example demonstrates how you can configure the `skywalking` and `skywalking-logger` Plugins on a Route so that request logs are associated with the corresponding traces.
Create a Route with the `skywalking-logger` Plugin and configure the Plugin with your OAP server URI:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "skywalking-logger-route",
"uri": "/anything",
"plugins": {
"skywalking": {
"sample_ratio": 1
},
"skywalking-logger": {
"endpoint_addr": "http://192.168.2.103:12800"
}
},
"upstream": {
"nodes": {
"httpbin.org:80": 1
},
"type": "roundrobin"
}
}'
```
Generate a few requests to the Route:
```shell
curl -i "http://127.0.0.1:9080/anything"
```
You should receive `HTTP/1.1 200 OK` responses.
In [Skywalking UI](http://localhost:8080), navigate to __General Service__ > __Services__. You should see a service called `APISIX` with a trace corresponding to your request, where you can view the associated logs:
![trace context](https://static.apiseven.com/uploads/2025/01/16/soUpXm6b_trace-view-logs.png)
![associated log](https://static.apiseven.com/uploads/2025/01/16/XD934LvU_associated-logs.png)
View File
@@ -0,0 +1,176 @@
---
title: skywalking
keywords:
- Apache APISIX
- API Gateway
- Plugin
- SkyWalking
description: The skywalking Plugin supports integrating with Apache SkyWalking for request tracing.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
<head>
<link rel="canonical" href="https://docs.api7.ai/hub/skywalking" />
</head>
## Description
The `skywalking` Plugin supports integrating with [Apache SkyWalking](https://skywalking.apache.org) for request tracing.
SkyWalking uses its native Nginx Lua tracer to provide tracing, topology analysis, and metrics from both service and URI perspectives.
The SkyWalking server supports both HTTP and gRPC protocols for receiving data; APISIX currently interacts with it over HTTP only.
## Static Configurations
By default, service names and endpoint address for the Plugin are pre-configured in the [default configuration](https://github.com/apache/apisix/blob/master/apisix/cli/config.lua).
To customize these values, add the corresponding configurations to `config.yaml`. For example:
```yaml
plugin_attr:
skywalking:
report_interval: 3 # Reporting interval time in seconds.
service_name: APISIX # Service name for SkyWalking reporter.
service_instance_name: "APISIX Instance Name" # Service instance name for SkyWalking reporter.
# Set to $hostname to get the local hostname.
endpoint_addr: http://127.0.0.1:12800 # SkyWalking HTTP endpoint.
```
Reload APISIX for changes to take effect.
## Attributes
| Name | Type | Required | Default | Valid values | Description |
|--------------|--------|----------|---------|--------------|----------------------------------------------------------------------------|
| sample_ratio | number | True | 1 | [0.00001, 1] | Frequency of request sampling. Setting the sample ratio to `1` means to sample all requests. |
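For example, a minimal sketch of sampling only a portion of traffic by lowering `sample_ratio` (the ratio, route, and upstream below are illustrative; the Plugin is assumed to be enabled in your configuration and `admin_key` to hold your Admin API key):
```shell
# Illustrative only: trace roughly half of the requests on this Route
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
  -H "X-API-KEY: ${admin_key}" \
  -d '{
    "id": "skywalking-sampled-route",
    "uri": "/anything",
    "plugins": {
      "skywalking": {
        "sample_ratio": 0.5
      }
    },
    "upstream": {
      "nodes": {
        "httpbin.org:80": 1
      },
      "type": "roundrobin"
    }
  }'
```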
## Example
To follow along with the example, start the storage, OAP, and Booster UI with Docker Compose, following [SkyWalking's documentation](https://skywalking.apache.org/docs/main/next/en/setup/backend/backend-docker/). Once set up, the OAP server should be listening on `12800` and you should be able to access the UI at [http://localhost:8080](http://localhost:8080).
Update APISIX configuration file to enable the `skywalking` plugin, which is disabled by default, and update the endpoint address:
```yaml title="config.yaml"
plugins:
- skywalking
- ...
plugin_attr:
skywalking:
report_interval: 3
service_name: APISIX
service_instance_name: APISIX Instance
endpoint_addr: http://192.168.2.103:12800
```
Reload APISIX for configuration changes to take effect.
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
### Trace All Requests
The following example demonstrates how you can trace all requests passing through a Route.
Create a Route with `skywalking` and configure the sampling ratio to be 1 to trace all requests:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "skywalking-route",
"uri": "/anything",
"plugins": {
"skywalking": {
"sample_ratio": 1
}
},
"upstream": {
"nodes": {
"httpbin.org:80": 1
},
"type": "roundrobin"
}
}'
```
Send a few requests to the Route:
```shell
curl -i "http://127.0.0.1:9080/anything"
```
You should receive `HTTP/1.1 200 OK` responses.
In [Skywalking UI](http://localhost:8080), navigate to __General Service__ > __Services__. You should see a service called `APISIX` with traces corresponding to your requests:
![SkyWalking APISIX traces](https://static.apiseven.com/uploads/2025/01/15/UdwiO8NJ_skywalking-traces.png)
### Associate Traces with Logs
The following example demonstrates how you can configure the `skywalking-logger` Plugin alongside the `skywalking` Plugin on a Route so that request logs are associated with the corresponding traces.
Create a Route with the `skywalking-logger` Plugin and configure the Plugin with your OAP server URI:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "skywalking-logger-route",
"uri": "/anything",
"plugins": {
"skywalking": {
"sample_ratio": 1
},
"skywalking-logger": {
"endpoint_addr": "http://192.168.2.103:12800"
}
},
"upstream": {
"nodes": {
"httpbin.org:80": 1
},
"type": "roundrobin"
}
}'
```
Generate a few requests to the Route:
```shell
curl -i "http://127.0.0.1:9080/anything"
```
You should receive `HTTP/1.1 200 OK` responses.
In [Skywalking UI](http://localhost:8080), navigate to __General Service__ > __Services__. You should see a service called `APISIX` with a trace corresponding to your request, where you can view the associated logs:
![trace context](https://static.apiseven.com/uploads/2025/01/16/soUpXm6b_trace-view-logs.png)
![associated log](https://static.apiseven.com/uploads/2025/01/16/XD934LvU_associated-logs.png)
View File
@@ -0,0 +1,184 @@
---
title: sls-logger
keywords:
- Apache APISIX
- API Gateway
- Plugin
- SLS Logger
- Alibaba Cloud Log Service
description: This document contains information about the Apache APISIX sls-logger Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `sls-logger` Plugin is used to push logs to [Alibaba Cloud Log Service](https://www.alibabacloud.com/help/en/log-service/latest/use-the-syslog-protocol-to-upload-logs) using [RFC5424](https://tools.ietf.org/html/rfc5424).
It might take some time to receive the log data. It will be automatically sent after the timer function in the [batch processor](../batch-processor.md) expires.
## Attributes
| Name | Required | Description |
|-------------------|----------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| host | True | IP address or the hostname of the TCP server. See [Alibaba Cloud log service documentation](https://www.alibabacloud.com/help/en/log-service/latest/endpoints) for details. Use IP address instead of domain. |
| port | True | Target upstream port. Defaults to `10009`. |
| timeout | False | Timeout for the upstream to send data. |
| log_format | False | Log format declared as key value pairs in JSON format. Values only support strings. [APISIX](../apisix-variable.md) or [Nginx](http://nginx.org/en/docs/varindex.html) variables can be used by prefixing the string with `$`. |
| project | True | Project name in Alibaba Cloud Log Service. Create the project in SLS before using this Plugin. |
| logstore | True | Logstore name in Alibaba Cloud Log Service. Create the Logstore in SLS before using this Plugin. |
| access_key_id | True | AccessKey ID in Alibaba Cloud. See [Authorization](https://www.alibabacloud.com/help/en/log-service/latest/create-a-ram-user-and-authorize-the-ram-user-to-access-log-service) for more details. |
| access_key_secret | True | AccessKey Secret in Alibaba Cloud. See [Authorization](https://www.alibabacloud.com/help/en/log-service/latest/create-a-ram-user-and-authorize-the-ram-user-to-access-log-service) for more details. |
| include_req_body | False | When set to `true`, includes the request body in the log. |
| include_req_body_expr | False | Filter for when the `include_req_body` attribute is set to `true`. Request body is only logged when the expression set here evaluates to `true`. See [lua-resty-expr](https://github.com/api7/lua-resty-expr) for more. |
| include_resp_body | False | When set to `true` includes the response body in the log. |
| include_resp_body_expr | False | Filter for when the `include_resp_body` attribute is set to `true`. Response body is only logged when the expression set here evaluates to `true`. See [lua-resty-expr](https://github.com/api7/lua-resty-expr) for more. |
| name | False | Unique identifier for the batch processor. If you use Prometheus to monitor APISIX metrics, the name is exported in `apisix_batch_process_entries`. |
NOTE: `encrypt_fields = {"access_key_secret"}` is also defined in the schema, which means that the field will be stored encrypted in etcd. See [encrypted storage fields](../plugin-develop.md#encrypted-storage-fields).
This Plugin supports using batch processors to aggregate and process entries (logs/data) in a batch. This avoids the need for frequently submitting the data. The batch processor submits data every `5` seconds or when the data in the queue reaches `1000`. See [Batch Processor](../batch-processor.md#configuration) for more information or setting your custom configuration.
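For instance, a rough sketch that combines a per-route `log_format` with a smaller batch size; all values below, including the SLS project and credentials, are placeholders, and `$admin_key` is assumed to hold your Admin API key:
```shell
# Illustrative only: custom per-route log format and a batch of at most 10 entries
curl http://127.0.0.1:9180/apisix/admin/routes/5 -H "X-API-KEY: $admin_key" -X PUT -d '
{
  "plugins": {
    "sls-logger": {
      "host": "100.100.99.135",
      "port": 10009,
      "project": "your_project",
      "logstore": "your_logstore",
      "access_key_id": "your_access_key_id",
      "access_key_secret": "your_access_key_secret",
      "timeout": 30000,
      "log_format": {
        "host": "$host",
        "client_ip": "$remote_addr"
      },
      "batch_max_size": 10
    }
  },
  "upstream": {
    "type": "roundrobin",
    "nodes": {
      "127.0.0.1:1980": 1
    }
  },
  "uri": "/hello"
}'
```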
### Example of default log format
```json
{
"route_conf": {
"host": "100.100.99.135",
"buffer_duration": 60,
"timeout": 30000,
"include_req_body": false,
"logstore": "your_logstore",
"log_format": {
"vip": "$remote_addr"
},
"project": "your_project",
"inactive_timeout": 5,
"access_key_id": "your_access_key_id",
"access_key_secret": "your_access_key_secret",
"batch_max_size": 1000,
"max_retry_count": 0,
"retry_delay": 1,
"port": 10009,
"name": "sls-logger"
},
"data": "<46>1 2024-01-06T03:29:56.457Z localhost apisix 28063 - [logservice project=\"your_project\" logstore=\"your_logstore\" access-key-id=\"your_access_key_id\" access-key-secret=\"your_access_key_secret\"] {\"vip\":\"127.0.0.1\",\"route_id\":\"1\"}\n"
}
```
## Metadata
You can also set the format of the logs by configuring the Plugin metadata. The following configurations are available:
| Name | Type | Required | Default | Description |
| ---------- | ------ | -------- | ----------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| log_format | object | False | | Log format declared as key value pairs in JSON format. Values only support strings. [APISIX](../apisix-variable.md) or [Nginx](http://nginx.org/en/docs/varindex.html) variables can be used by prefixing the string with `$`. |
:::info IMPORTANT
Configuring the Plugin metadata is global in scope. This means that it will take effect on all Routes and Services which use the `sls-logger` Plugin.
:::
The example below shows how you can configure through the Admin API:
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl http://127.0.0.1:9180/apisix/admin/plugin_metadata/sls-logger -H "X-API-KEY: $admin_key" -X PUT -d '
{
"log_format": {
"host": "$host",
"@timestamp": "$time_iso8601",
"client_ip": "$remote_addr"
}
}'
```
With this configuration, your logs would be formatted as shown below:
```shell
{"host":"localhost","@timestamp":"2020-09-23T19:05:05-04:00","client_ip":"127.0.0.1","route_id":"1"}
{"host":"localhost","@timestamp":"2020-09-23T19:05:05-04:00","client_ip":"127.0.0.1","route_id":"1"}
```
## Enable Plugin
The example below shows how you can configure the Plugin on a specific Route:
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/5 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"plugins": {
"sls-logger": {
"host": "100.100.99.135",
"port": 10009,
"project": "your_project",
"logstore": "your_logstore",
"access_key_id": "your_access_key_id",
"access_key_secret": "your_access_key_secret",
"timeout": 30000
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
},
"uri": "/hello"
}'
```
## Example usage
Now, if you make a request to APISIX, it will be logged in Alibaba Cloud Log Service:
```shell
curl -i http://127.0.0.1:9080/hello
```
Now, if you check Alibaba Cloud Log Service, you will be able to see the logs:
![sls logger view](../../../assets/images/plugin/sls-logger-1.png "sls logger view")
## Delete Plugin
To remove the `sls-logger` Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/hello",
"plugins": {},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
}
}'
```
View File
@@ -0,0 +1,212 @@
---
title: splunk-hec-logging
keywords:
- Apache APISIX
- API Gateway
- Plugin
- Splunk HTTP Event Collector
- splunk-hec-logging
description: This document contains information about the Apache APISIX splunk-hec-logging Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `splunk-hec-logging` Plugin is used to forward logs to [Splunk HTTP Event Collector (HEC)](https://docs.splunk.com/Documentation/Splunk/8.2.6/Data/UsetheHTTPEventCollector) for analysis and storage.
When the Plugin is enabled, APISIX will serialize the request context information to [Splunk Event Data format](https://docs.splunk.com/Documentation/Splunk/latest/Data/FormateventsforHTTPEventCollector#Event_metadata) and submit it to the batch queue. When the maximum batch size is exceeded, the data in the queue is pushed to Splunk HEC. See [batch processor](../batch-processor.md) for more details.
## Attributes
| Name | Required | Default | Description |
|------------------|----------|---------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| endpoint | True | | Splunk HEC endpoint configurations. |
| endpoint.uri | True | | Splunk HEC event collector API endpoint. |
| endpoint.token | True | | Splunk HEC authentication token. |
| endpoint.channel | False | | Splunk HEC send data channel identifier. Read more: [About HTTP Event Collector Indexer Acknowledgment](https://docs.splunk.com/Documentation/Splunk/8.2.3/Data/AboutHECIDXAck). |
| endpoint.timeout | False | 10 | Splunk HEC send data timeout in seconds. |
| endpoint.keepalive_timeout | False | 60000 | Keepalive timeout in milliseconds. |
| ssl_verify | False | true | When set to `true` enables SSL verification as per [OpenResty docs](https://github.com/openresty/lua-nginx-module#tcpsocksslhandshake). |
| log_format | False | | Log format declared as key value pairs in JSON format. Values only support strings. [APISIX](../apisix-variable.md) or [Nginx](http://nginx.org/en/docs/varindex.html) variables can be used by prefixing the string with `$`. |
This Plugin supports using batch processors to aggregate and process entries (logs/data) in a batch. This avoids the need for frequently submitting the data. The batch processor submits data every `5` seconds or when the data in the queue reaches `1000`. See [Batch Processor](../batch-processor.md#configuration) for more information or setting your custom configuration.
### Example of default log format
```json
{
"sourcetype": "_json",
"time": 1704513555.392,
"event": {
"upstream": "127.0.0.1:1980",
"request_url": "http://localhost:1984/hello",
"request_query": {},
"request_size": 59,
"response_headers": {
"content-length": "12",
"server": "APISIX/3.7.0",
"content-type": "text/plain",
"connection": "close"
},
"response_status": 200,
"response_size": 118,
"latency": 108.00004005432,
"request_method": "GET",
"request_headers": {
"connection": "close",
"host": "localhost"
}
},
"source": "apache-apisix-splunk-hec-logging",
"host": "localhost"
}
```
## Metadata
You can also set the format of the logs by configuring the Plugin metadata. The following configurations are available:
| Name | Type | Required | Default | Description |
| ---------- | ------ | -------- | ----------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| log_format | object | False | | Log format declared as key value pairs in JSON format. Values only support strings. [APISIX](../apisix-variable.md) or [Nginx](http://nginx.org/en/docs/varindex.html) variables can be used by prefixing the string with `$`. |
:::info IMPORTANT
Configuring the Plugin metadata is global in scope. This means that it will take effect on all Routes and Services which use the `splunk-hec-logging` Plugin.
:::
The example below shows how you can configure through the Admin API:
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl http://127.0.0.1:9180/apisix/admin/plugin_metadata/splunk-hec-logging -H "X-API-KEY: $admin_key" -X PUT -d '
{
"log_format": {
"host": "$host",
"@timestamp": "$time_iso8601",
"client_ip": "$remote_addr"
}
}'
```
With this configuration, your logs would be formatted as shown below:
```json
[{"time":1673976669.269,"source":"apache-apisix-splunk-hec-logging","event":{"host":"localhost","client_ip":"127.0.0.1","@timestamp":"2023-01-09T14:47:25+08:00","route_id":"1"},"host":"DESKTOP-2022Q8F-wsl","sourcetype":"_json"}]
```
## Enable Plugin
### Full configuration
The example below shows a complete configuration of the Plugin on a specific Route:
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"plugins":{
"splunk-hec-logging":{
"endpoint":{
"uri":"http://127.0.0.1:8088/services/collector",
"token":"BD274822-96AA-4DA6-90EC-18940FB2414C",
"channel":"FE0ECFAD-13D5-401B-847D-77833BD77131",
"timeout":60
},
"buffer_duration":60,
"max_retry_count":0,
"retry_delay":1,
"inactive_timeout":2,
"batch_max_size":10
}
},
"upstream":{
"type":"roundrobin",
"nodes":{
"127.0.0.1:1980":1
}
},
"uri":"/splunk.do"
}'
```
### Minimal configuration
The example below shows a bare minimum configuration of the Plugin on a Route:
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"plugins":{
"splunk-hec-logging":{
"endpoint":{
"uri":"http://127.0.0.1:8088/services/collector",
"token":"BD274822-96AA-4DA6-90EC-18940FB2414C"
}
}
},
"upstream":{
"type":"roundrobin",
"nodes":{
"127.0.0.1:1980":1
}
},
"uri":"/splunk.do"
}'
```
## Example usage
Once you have configured the Route to use the Plugin, when you make a request to APISIX, it will be logged in your Splunk server:
```shell
curl -i http://127.0.0.1:9080/splunk.do?q=hello
```
You should be able to login and search these logs from your Splunk dashboard:
![splunk hec search view](../../../assets/images/plugin/splunk-hec-admin-en.png)
## Delete Plugin
To remove the `splunk-hec-logging` Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/hello",
"plugins": {},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
}
}'
```
View File
@@ -0,0 +1,154 @@
---
title: syslog
keywords:
- Apache APISIX
- API Gateway
- Plugin
- Syslog
description: This document contains information about the Apache APISIX syslog Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `syslog` Plugin is used to push logs to a Syslog server.
Logs can be sent as JSON objects.
## Attributes
| Name | Type | Required | Default | Valid values | Description |
|------------------|---------|----------|--------------|---------------|--------------------------------------------------------------------------------------------------------------------------|
| host | string | True | | | IP address or the hostname of the Syslog server. |
| port | integer | True | | | Target port of the Syslog server. |
| name | string | False | "sys logger" | | Identifier for the server. If you use Prometheus to monitor APISIX metrics, the name is exported in `apisix_batch_process_entries`. |
| timeout | integer | False | 3000 | [1, ...] | Timeout in ms for the upstream to send data. |
| tls | boolean | False | false | | When set to `true` performs TLS verification. |
| flush_limit | integer | False | 4096 | [1, ...] | Maximum size of the buffer (KB) and the current message before it is flushed and written to the server. |
| drop_limit | integer | False | 1048576 | | Maximum size of the buffer (KB) and the current message before the current message is dropped because of the size limit. |
| sock_type | string | False | "tcp" | ["tcp", "udp"] | Transport layer protocol to use. |
| pool_size | integer | False | 5 | [5, ...] | Keep-alive pool size used by `sock:keepalive`. |
| log_format | object | False | | | Log format declared as key value pairs in JSON format. Values only support strings. [APISIX](../apisix-variable.md) or [Nginx](http://nginx.org/en/docs/varindex.html) variables can be used by prefixing the string with `$`. |
| include_req_body | boolean | False | false | [false, true] | When set to `true` includes the request body in the log. |
| include_req_body_expr | array | False | | | Filter for when the `include_req_body` attribute is set to `true`. Request body is only logged when the expression set here evaluates to `true`. See [lua-resty-expr](https://github.com/api7/lua-resty-expr) for more. |
| include_resp_body | boolean | False | false | [false, true] | When set to `true` includes the response body in the log. |
| include_resp_body_expr | array | False | | | When the `include_resp_body` attribute is set to `true`, use this to filter based on [lua-resty-expr](https://github.com/api7/lua-resty-expr). If present, only logs the response if the expression evaluates to `true`. |
This Plugin supports using batch processors to aggregate and process entries (logs/data) in a batch. This avoids the need for frequently submitting the data. The batch processor submits data every `5` seconds or when the data in the queue reaches `1000`. See [Batch Processor](../batch-processor.md#configuration) for more information or setting your custom configuration.
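As an illustration, the `sock_type` attribute can switch the transport to UDP; a minimal sketch (the host and port are placeholders, and `$admin_key` is assumed to hold your Admin API key):
```shell
# Illustrative only: ship logs over UDP instead of TCP
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
  "plugins": {
    "syslog": {
      "host": "127.0.0.1",
      "port": 514,
      "sock_type": "udp",
      "flush_limit": 1
    }
  },
  "upstream": {
    "type": "roundrobin",
    "nodes": {
      "127.0.0.1:1980": 1
    }
  },
  "uri": "/hello"
}'
```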
### meta_format example
```text
"<46>1 2024-01-06T02:30:59.145Z 127.0.0.1 apisix 82324 - - {\"response\":{\"status\":200,\"size\":141,\"headers\":{\"content-type\":\"text/plain\",\"server\":\"APISIX/3.7.0\",\"transfer-encoding\":\"chunked\",\"connection\":\"close\"}},\"route_id\":\"1\",\"server\":{\"hostname\":\"baiyundeMacBook-Pro.local\",\"version\":\"3.7.0\"},\"request\":{\"uri\":\"/opentracing\",\"url\":\"http://127.0.0.1:1984/opentracing\",\"querystring\":{},\"method\":\"GET\",\"size\":155,\"headers\":{\"content-type\":\"application/x-www-form-urlencoded\",\"host\":\"127.0.0.1:1984\",\"user-agent\":\"lua-resty-http/0.16.1 (Lua) ngx_lua/10025\"}},\"upstream\":\"127.0.0.1:1982\",\"apisix_latency\":100.99999809265,\"service_id\":\"\",\"upstream_latency\":1,\"start_time\":1704508259044,\"client_ip\":\"127.0.0.1\",\"latency\":101.99999809265}\n"
```
## Metadata
You can also set the format of the logs by configuring the Plugin metadata. The following configurations are available:
| Name | Type | Required | Default | Description |
| ---------- | ------ | -------- | ----------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| log_format | object | False | | Log format declared as key value pairs in JSON format. Values only support strings. [APISIX](../apisix-variable.md) or [Nginx](http://nginx.org/en/docs/varindex.html) variables can be used by prefixing the string with `$`. |
:::info IMPORTANT
Configuring the Plugin metadata is global in scope. This means that it will take effect on all Routes and Services which use the `syslog` Plugin.
:::
The example below shows how you can configure through the Admin API:
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl http://127.0.0.1:9180/apisix/admin/plugin_metadata/syslog -H "X-API-KEY: $admin_key" -X PUT -d '
{
"log_format": {
"host": "$host",
"@timestamp": "$time_iso8601",
"client_ip": "$remote_addr"
}
}'
```
With this configuration, your logs would be formatted as shown below:
```shell
{"host":"localhost","@timestamp":"2020-09-23T19:05:05-04:00","client_ip":"127.0.0.1","route_id":"1"}
{"host":"localhost","@timestamp":"2020-09-23T19:05:05-04:00","client_ip":"127.0.0.1","route_id":"1"}
```
## Enable Plugin
The example below shows how you can enable the Plugin for a specific Route:
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"plugins": {
"syslog": {
"host" : "127.0.0.1",
"port" : 5044,
"flush_limit" : 1
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
},
"uri": "/hello"
}'
```
## Example usage
Now, if you make a request to APISIX, it will be logged in your Syslog server:
```shell
curl -i http://127.0.0.1:9080/hello
```
## Delete Plugin
To remove the `syslog` Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"methods": ["GET"],
"uri": "/hello",
"plugins": {},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
}
}'
```
View File
@@ -0,0 +1,189 @@
---
title: tcp-logger
keywords:
- Apache APISIX
- API Gateway
- Plugin
- TCP Logger
- tcp-logger
description: This document contains information about the Apache APISIX tcp-logger Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `tcp-logger` Plugin can be used to push log data to TCP servers.
This provides the ability to send log data as JSON objects to monitoring tools and other TCP servers.
This Plugin also allows you to push logs in batches to your external TCP server. It might take some time to receive the log data, as it is sent automatically after the timer function in the [batch processor](../batch-processor.md) expires.
## Attributes
| Name | Type | Required | Default | Valid values | Description |
|------------------|---------|----------|---------|--------------|----------------------------------------------------------|
| host | string | True | | | IP address or the hostname of the TCP server. |
| port | integer | True | | [0,...] | Target upstream port. |
| timeout | integer | False | 1000 | [1,...] | Timeout for the upstream to send data. |
| log_format | object | False | | | Log format declared as key value pairs in JSON format. Values only support strings. [APISIX](../apisix-variable.md) or [Nginx](http://nginx.org/en/docs/varindex.html) variables can be used by prefixing the string with `$`. |
| tls | boolean | False | false | | When set to `true` performs SSL verification. |
| tls_options | string | False | | | TLS options. |
| include_req_body | boolean | False | false | [false, true] | When set to `true` includes the request body in the log. |
| include_req_body_expr | array | False | | | Filter for when the `include_req_body` attribute is set to `true`. Request body is only logged when the expression set here evaluates to `true`. See [lua-resty-expr](https://github.com/api7/lua-resty-expr) for more. |
| include_resp_body | boolean | False | false | [false, true] | When set to `true` includes the response body in the log. |
| include_resp_body_expr | array | False | | | Filter for when the `include_resp_body` attribute is set to `true`. Response body is only logged when the expression set here evaluates to `true`. See [lua-resty-expr](https://github.com/api7/lua-resty-expr) for more. |
This Plugin supports using batch processors to aggregate and process entries (logs/data) in a batch. This avoids the need for frequently submitting the data. The batch processor submits data every `5` seconds or when the data in the queue reaches `1000`. See [Batch Processor](../batch-processor.md#configuration) for more information or setting your custom configuration.
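For example, a hedged sketch of pushing logs to a TLS-enabled collector by setting `tls` to `true`; the hostname and port are placeholders, the server is assumed to present a verifiable certificate, and `$admin_key` is assumed to hold your Admin API key:
```shell
# Illustrative only: connect to the TCP log server over TLS
curl http://127.0.0.1:9180/apisix/admin/routes/5 -H "X-API-KEY: $admin_key" -X PUT -d '
{
  "plugins": {
    "tcp-logger": {
      "host": "logs.example.com",
      "port": 5044,
      "tls": true,
      "batch_max_size": 1,
      "name": "tcp logger"
    }
  },
  "upstream": {
    "type": "roundrobin",
    "nodes": {
      "127.0.0.1:1980": 1
    }
  },
  "uri": "/hello"
}'
```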
### Example of default log format
```json
{
"response": {
"status": 200,
"headers": {
"server": "APISIX/3.7.0",
"content-type": "text/plain",
"content-length": "12",
"connection": "close"
},
"size": 118
},
"server": {
"version": "3.7.0",
"hostname": "localhost"
},
"start_time": 1704527628474,
"client_ip": "127.0.0.1",
"service_id": "",
"latency": 102.9999256134,
"apisix_latency": 100.9999256134,
"upstream_latency": 2,
"request": {
"headers": {
"connection": "close",
"host": "localhost"
},
"size": 59,
"method": "GET",
"uri": "/hello",
"url": "http://localhost:1984/hello",
"querystring": {}
},
"upstream": "127.0.0.1:1980",
"route_id": "1"
}
```
## Metadata
You can also set the format of the logs by configuring the Plugin metadata. The following configurations are available:
| Name | Type | Required | Default | Description |
| ---------- | ------ | -------- | ----------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| log_format | object | False | | Log format declared as key value pairs in JSON format. Values only support strings. [APISIX](../apisix-variable.md) or [Nginx](http://nginx.org/en/docs/varindex.html) variables can be used by prefixing the string with `$`. |
:::info IMPORTANT
Configuring the Plugin metadata is global in scope. This means that it will take effect on all Routes and Services which use the `tcp-logger` Plugin.
:::
The example below shows how you can configure through the Admin API:
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl http://127.0.0.1:9180/apisix/admin/plugin_metadata/tcp-logger -H "X-API-KEY: $admin_key" -X PUT -d '
{
"log_format": {
"host": "$host",
"@timestamp": "$time_iso8601",
"client_ip": "$remote_addr"
}
}'
```
With this configuration, your logs would be formatted as shown below:
```json
{"@timestamp":"2023-01-09T14:47:25+08:00","route_id":"1","host":"localhost","client_ip":"127.0.0.1"}
```
## Enable Plugin
The example below shows how you can enable the `tcp-logger` Plugin on a specific Route:
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/5 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"plugins": {
"tcp-logger": {
"host": "127.0.0.1",
"port": 5044,
"tls": false,
"batch_max_size": 1,
"name": "tcp logger"
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
},
"uri": "/hello"
}'
```
## Example usage
Now, if you make a request to APISIX, it will be logged in your TCP server:
```shell
curl -i http://127.0.0.1:9080/hello
```
## Delete Plugin
To remove the `tcp-logger` Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"methods": ["GET"],
"uri": "/hello",
"plugins": {},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
}
}'
```
View File
@@ -0,0 +1,196 @@
---
title: tencent-cloud-cls
keywords:
- Apache APISIX
- API Gateway
- Plugin
- CLS
- Tencent Cloud
description: This document contains information about the Apache APISIX tencent-cloud-cls Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `tencent-cloud-cls` Plugin uses [TencentCloud CLS](https://cloud.tencent.com/document/product/614) API to forward APISIX logs to your topic.
## Attributes
| Name | Type | Required | Default | Valid values | Description |
| ----------------- | ------- |----------|---------|---------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| cls_host | string | Yes | | | CLS API host. See [Uploading Structured Logs](https://www.tencentcloud.com/document/api/614/16873) for details. |
| cls_topic | string | Yes | | | Topic ID of CLS. |
| secret_id | string | Yes | | | SecretId of your API key. |
| secret_key | string | Yes | | | SecretKey of your API key. |
| sample_ratio | number | No | 1 | [0.00001, 1] | How often to sample the requests. Setting to `1` will sample all requests. |
| include_req_body | boolean | No | false | [false, true] | When set to `true` includes the request body in the log. If the request body is too big to be kept in the memory, it can't be logged due to NGINX's limitations. |
| include_req_body_expr | array | No | | | Filter for when the `include_req_body` attribute is set to `true`. Request body is only logged when the expression set here evaluates to `true`. See [lua-resty-expr](https://github.com/api7/lua-resty-expr) for more. |
| include_resp_body | boolean | No | false | [false, true] | When set to `true` includes the response body in the log. |
| include_resp_body_expr | array | No | | | Filter for when the `include_resp_body` attribute is set to `true`. Response body is only logged when the expression set here evaluates to `true`. See [lua-resty-expr](https://github.com/api7/lua-resty-expr) for more. |
| global_tag | object | No | | | Key-value pairs in JSON format, sent with each log entry. |
| log_format | object | No | | | Log format declared as key value pairs in JSON format. Values only support strings. [APISIX](../apisix-variable.md) or [Nginx](http://nginx.org/en/docs/varindex.html) variables can be used by prefixing the string with `$`. |
NOTE: `encrypt_fields = {"secret_key"}` is also defined in the schema, which means that the field will be stored encrypted in etcd. See [encrypted storage fields](../plugin-develop.md#encrypted-storage-fields).
This Plugin supports using batch processors to aggregate and process entries (logs/data) in a batch. This avoids the need for frequently submitting the data. The batch processor submits data every `5` seconds or when the data in the queue reaches `1000`. See [Batch Processor](../batch-processor.md#configuration) for more information or setting your custom configuration.
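For instance, a rough sketch of shipping only about half of the requests by lowering `sample_ratio`; the CLS host, topic, and credentials are placeholders, and `$admin_key` is assumed to hold your Admin API key:
```shell
# Illustrative only: sample roughly 50% of requests into the CLS topic
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
  "plugins": {
    "tencent-cloud-cls": {
      "cls_host": "ap-guangzhou.cls.tencentyun.com",
      "cls_topic": "${your CLS topic name}",
      "secret_id": "${your secret id}",
      "secret_key": "${your secret key}",
      "sample_ratio": 0.5
    }
  },
  "upstream": {
    "type": "roundrobin",
    "nodes": {
      "127.0.0.1:1980": 1
    }
  },
  "uri": "/hello"
}'
```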
### Example of default log format
```json
{
"response": {
"headers": {
"content-type": "text/plain",
"connection": "close",
"server": "APISIX/3.7.0",
"transfer-encoding": "chunked"
},
"size": 136,
"status": 200
},
"route_id": "1",
"upstream": "127.0.0.1:1982",
"client_ip": "127.0.0.1",
"apisix_latency": 100.99985313416,
"service_id": "",
"latency": 103.99985313416,
"start_time": 1704525145772,
"server": {
"version": "3.7.0",
"hostname": "localhost"
},
"upstream_latency": 3,
"request": {
"headers": {
"connection": "close",
"host": "localhost"
},
"url": "http://localhost:1984/opentracing",
"querystring": {},
"method": "GET",
"size": 65,
"uri": "/opentracing"
}
}
```
## Metadata
You can also set the format of the logs by configuring the Plugin metadata. The following configurations are available:
| Name | Type | Required | Default | Description |
| ---------- | ------ | -------- | ----------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| log_format | object | False | | Log format declared as key value pairs in JSON format. Values only support strings. [APISIX](../apisix-variable.md) or [Nginx](http://nginx.org/en/docs/varindex.html) variables can be used by prefixing the string with `$`. |
:::info IMPORTANT
Configuring the Plugin metadata is global in scope. This means that it will take effect on all Routes and Services which use the `tencent-cloud-cls` Plugin.
:::
The example below shows how you can configure through the Admin API:
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl http://127.0.0.1:9180/apisix/admin/plugin_metadata/tencent-cloud-cls \
-H "X-API-KEY: $admin_key" -X PUT -d '
{
"log_format": {
"host": "$host",
"@timestamp": "$time_iso8601",
"client_ip": "$remote_addr"
}
}'
```
With this configuration, your logs would be formatted as shown below:
```shell
{"host":"localhost","@timestamp":"2020-09-23T19:05:05-04:00","client_ip":"127.0.0.1","route_id":"1"}
{"host":"localhost","@timestamp":"2020-09-23T19:05:05-04:00","client_ip":"127.0.0.1","route_id":"1"}
```
## Enable Plugin
The example below shows how you can enable the Plugin on a specific Route:
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 \
-H "X-API-KEY: $admin_key" -X PUT -d '
{
"plugins": {
"tencent-cloud-cls": {
"cls_host": "ap-guangzhou.cls.tencentyun.com",
"cls_topic": "${your CLS topic name}",
"global_tag": {
"module": "cls-logger",
"server_name": "YourApiGateWay"
},
"include_req_body": true,
"include_resp_body": true,
"secret_id": "${your secret id}",
"secret_key": "${your secret key}"
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
},
"uri": "/hello"
}'
```
## Example usage
Now, if you make a request to APISIX, it will be logged in your cls topic:
```shell
curl -i http://127.0.0.1:9080/hello
```
## Delete Plugin
To disable this Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 \
-H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/hello",
"plugins": {},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
}
}'
```
View File
@@ -0,0 +1,637 @@
---
title: traffic-split
keywords:
- Apache APISIX
- API Gateway
- Traffic Split
- Blue-green Deployment
- Canary Deployment
description: The traffic-split Plugin directs traffic to various Upstream services based on conditions and/or weights. It provides a dynamic and flexible approach to implement release strategies and manage traffic.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
<head>
<link rel="canonical" href="https://docs.api7.ai/hub/traffic-split" />
</head>
## Description
The `traffic-split` Plugin directs traffic to various Upstream services based on conditions and/or weights. It provides a dynamic and flexible approach to implement release strategies and manage traffic.
:::note
The traffic ratio between Upstream services may be less accurate since the round robin algorithm is used to direct traffic (especially when the state is reset).
:::
## Attributes
| Name | Type | Required | Default | Valid values | Description |
|--------------------------------|----------------|----------|------------|-----------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| rules | array[object] | False | | | An array of one or more pairs of matching conditions and actions to be executed. |
| rules.match | array[object] | False | | | Rules to match for conditional traffic split. |
| rules.match.vars | array[array] | False | | | An array of one or more matching conditions in the form of [lua-resty-expr](https://github.com/api7/lua-resty-expr#operator-list) to conditionally execute the plugin. |
| rules.weighted_upstreams | array[object] | False | | | List of Upstream configurations. |
| rules.weighted_upstreams.upstream_id | string/integer | False | | | ID of the configured Upstream object. |
| rules.weighted_upstreams.weight | integer | False | weight = 1 | | Weight for each upstream. |
| rules.weighted_upstreams.upstream | object | False | | | Configuration of the upstream. Certain Upstream configuration options are not supported here; these fields are `service_name`, `discovery_type`, `checks`, `retries`, `retry_timeout`, `desc`, and `labels`. As a workaround, you can create an Upstream object and configure it in `upstream_id`. |
| rules.weighted_upstreams.upstream.type | array | False | roundrobin | [roundrobin, chash] | Algorithm for traffic splitting. `roundrobin` for weighted round robin and `chash` for consistent hashing. |
| rules.weighted_upstreams.upstream.hash_on | array | False | vars | | Used when `type` is `chash`. Support hashing on [NGINX variables](https://nginx.org/en/docs/varindex.html), headers, cookie, Consumer, or a combination of [NGINX variables](https://nginx.org/en/docs/varindex.html). |
| rules.weighted_upstreams.upstream.key | string | False | | | Used when `type` is `chash`. When `hash_on` is set to `header` or `cookie`, `key` is required. When `hash_on` is set to `consumer`, `key` is not required as the Consumer name will be used as the key automatically. |
| rules.weighted_upstreams.upstream.nodes | object | False | | | Addresses of the Upstream nodes. |
| rules.weighted_upstreams.upstream.timeout | object | False | 15 | | Timeout in seconds for connecting, sending and receiving messages. |
| rules.weighted_upstreams.upstream.pass_host | array | False | "pass" | ["pass", "node", "rewrite"] | Mode deciding how the host name is passed. `pass` passes the client's host name to the upstream. `node` passes the host configured in the node of the upstream. `rewrite` passes the value configured in `upstream_host`. |
| rules.weighted_upstreams.upstream.name | string | False | | | Identifier for the Upstream for specifying service name, usage scenarios, and so on. |
| rules.weighted_upstreams.upstream.upstream_host | string | False | | | Used when `pass_host` is `rewrite`. Host name of the upstream. |
## Examples
The examples below show different use cases for using the `traffic-split` Plugin.
:::note
You can fetch the `admin_key` from `config.yaml` and save to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
### Implement Canary Release
The following example demonstrates how to implement canary release with this Plugin.
A Canary release is a gradual deployment in which an increasing percentage of traffic is directed to a new release, allowing for a controlled and monitored rollout. This method ensures that any potential issues or bugs in the new release can be identified and addressed early on, before fully redirecting all traffic.
Create a Route and configure `traffic-split` Plugin with the following rules:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${ADMIN_API_KEY}" \
-d '{
"uri": "/headers",
"id": "traffic-split-route",
"plugins": {
"traffic-split": {
"rules": [
{
"weighted_upstreams": [
{
"upstream": {
"type": "roundrobin",
"scheme": "https",
"pass_host": "node",
"nodes": {
"httpbin.org:443":1
}
},
"weight": 3
},
{
"weight": 2
}
]
}
]
}
},
"upstream": {
"type": "roundrobin",
"scheme": "https",
"pass_host": "node",
"nodes": {
"mock.api7.ai:443":1
}
}
}'
```
The proportion of traffic to each Upstream is determined by the weight of the Upstream relative to the total weight of all upstreams. Here, the total weight is calculated as: 3 + 2 = 5.
Therefore, 60% of the traffic is forwarded to `httpbin.org` and the other 40% is forwarded to `mock.api7.ai`.
Send 10 consecutive requests to the Route to verify:
```shell
resp=$(seq 10 | xargs -I{} curl "http://127.0.0.1:9080/headers" -sL) && \
count_httpbin=$(echo "$resp" | grep "httpbin.org" | wc -l) && \
count_mockapi7=$(echo "$resp" | grep "mock.api7.ai" | wc -l) && \
echo httpbin.org: $count_httpbin, mock.api7.ai: $count_mockapi7
```
You should see a response similar to the following:
```text
httpbin.org: 6, mock.api7.ai: 4
```
Adjust the Upstream weights accordingly to complete the canary release.
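For example, to direct 90% of the traffic to the new upstream before the final cutover, you could re-apply the same Route with adjusted weights (a sketch; only the two `weight` values differ from the configuration above):

```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
  -H "X-API-KEY: ${ADMIN_API_KEY}" \
  -d '{
    "uri": "/headers",
    "id": "traffic-split-route",
    "plugins": {
      "traffic-split": {
        "rules": [
          {
            "weighted_upstreams": [
              {
                "upstream": {
                  "type": "roundrobin",
                  "scheme": "https",
                  "pass_host": "node",
                  "nodes": {
                    "httpbin.org:443":1
                  }
                },
                "weight": 9
              },
              {
                "weight": 1
              }
            ]
          }
        ]
      }
    },
    "upstream": {
      "type": "roundrobin",
      "scheme": "https",
      "pass_host": "node",
      "nodes": {
        "mock.api7.ai:443":1
      }
    }
  }'
```

Increasing the first weight further, and eventually removing the second weighted upstream, completes the cutover to the new release.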
### Implement Blue-Green Deployment
The following example demonstrates how to implement blue-green deployment with this Plugin.
Blue-green deployment is a deployment strategy that involves maintaining two identical environments: the _blue_ and the _green_. The blue environment refers to the current production deployment and the green environment refers to the new deployment. Once the green environment is tested to be ready for production, traffic will be routed to the green environment, making it the new production deployment.
Create a Route and configure the `traffic-split` Plugin so that it redirects traffic only when the request contains the header `release: new_release`:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${ADMIN_API_KEY}" \
-d '{
"uri": "/headers",
"id": "traffic-split-route",
"plugins": {
"traffic-split": {
"rules": [
{
"match": [
{
"vars": [
["http_release","==","new_release"]
]
}
],
"weighted_upstreams": [
{
"upstream": {
"type": "roundrobin",
"scheme": "https",
"pass_host": "node",
"nodes": {
"httpbin.org:443":1
}
}
}
]
}
]
}
},
"upstream": {
"type": "roundrobin",
"scheme": "https",
"pass_host": "node",
"nodes": {
"mock.api7.ai:443":1
}
}
}'
```
Send a request to the Route with the `release` header:
```shell
curl "http://127.0.0.1:9080/headers" -H 'release: new_release'
```
You should see a response similar to the following:
```json
{
"headers": {
"Accept": "*/*",
"Host": "httpbin.org",
...
}
}
```
Send a request to the Route without any additional header:
```shell
curl "http://127.0.0.1:9080/headers"
```
You should see a response similar to the following:
```json
{
"headers": {
"accept": "*/*",
"host": "mock.api7.ai",
...
}
}
```
### Define Matching Condition for POST Request With APISIX Expressions
The following example demonstrates how to use [lua-resty-expr](https://github.com/api7/lua-resty-expr#operator-list) in rules to conditionally execute the Plugin when a certain condition of a POST request is satisfied.
Create a Route and configure `traffic-split` Plugin with the following rules:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${ADMIN_API_KEY}" \
-d '{
"uri": "/post",
"methods": ["POST"],
"id": "traffic-split-route",
"plugins": {
"traffic-split": {
"rules": [
{
"match": [
{
"vars": [
["post_arg_id", "==", "1"]
]
}
],
"weighted_upstreams": [
{
"upstream": {
"type": "roundrobin",
"scheme": "https",
"pass_host": "node",
"nodes": {
"httpbin.org:443":1
}
}
}
]
}
]
}
},
"upstream": {
"type": "roundrobin",
"scheme": "https",
"pass_host": "node",
"nodes": {
"mock.api7.ai:443":1
}
}
}'
```
Send a POST request with body `id=1`:
```shell
curl "http://127.0.0.1:9080/post" -X POST \
-H 'Content-Type: application/x-www-form-urlencoded' \
-d 'id=1'
```
You should see a response similar to the following:
```json
{
"args": {},
"data": "",
"files": {},
"form": {
"id": "1"
},
"headers": {
"Accept": "*/*",
"Content-Length": "4",
"Content-Type": "application/x-www-form-urlencoded",
"Host": "httpbin.org",
...
},
...
}
```
Send a POST request without `id=1` in the body:
```shell
curl "http://127.0.0.1:9080/post" -X POST \
-H 'Content-Type: application/x-www-form-urlencoded' \
-d 'random=string'
```
You should see that the request was forwarded to `mock.api7.ai`.
### Define AND Matching Conditions With APISIX Expressions
The following example demonstrates how to use [lua-resty-expr](https://github.com/api7/lua-resty-expr#operator-list) in rules to conditionally execute the Plugin when multiple conditions are satisfied.
Create a Route and configure `traffic-split` Plugin to redirect traffic only when all three conditions are satisfied:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${ADMIN_API_KEY}" \
-d '{
"uri": "/headers",
"id": "traffic-split-route",
"plugins": {
"traffic-split": {
"rules": [
{
"match": [
{
"vars": [
["arg_name","==","jack"],
["http_user-id",">","23"],
["http_apisix-key","~~","[a-z]+"]
]
}
],
"weighted_upstreams": [
{
"upstream": {
"type": "roundrobin",
"scheme": "https",
"pass_host": "node",
"nodes": {
"httpbin.org:443":1
}
},
"weight": 3
},
{
"weight": 2
}
]
}
]
}
},
"upstream": {
"type": "roundrobin",
"scheme": "https",
"pass_host": "node",
"nodes": {
"mock.api7.ai:443":1
}
}
}'
```
If conditions are satisfied, 60% of the traffic should be directed to `httpbin.org` and the other 40% should be directed to `mock.api7.ai`. If conditions are not satisfied, all traffic should be directed to `mock.api7.ai`.
Send 10 consecutive requests that satisfy all conditions to verify:
```shell
resp=$(seq 10 | xargs -I{} curl "http://127.0.0.1:9080/headers?name=jack" -H 'user-id: 30' -H 'apisix-key: helloapisix' -sL) && \
count_httpbin=$(echo "$resp" | grep "httpbin.org" | wc -l) && \
count_mockapi7=$(echo "$resp" | grep "mock.api7.ai" | wc -l) && \
echo httpbin.org: $count_httpbin, mock.api7.ai: $count_mockapi7
```
You should see a response similar to the following:
```text
httpbin.org: 6, mock.api7.ai: 4
```
Send 10 consecutive requests that do not satisfy the conditions to verify:
```shell
resp=$(seq 10 | xargs -I{} curl "http://127.0.0.1:9080/headers?name=random" -sL) && \
count_httpbin=$(echo "$resp" | grep "httpbin.org" | wc -l) && \
count_mockapi7=$(echo "$resp" | grep "mock.api7.ai" | wc -l) && \
echo httpbin.org: $count_httpbin, mock.api7.ai: $count_mockapi7
```
You should see a response similar to the following:
```text
httpbin.org: 0, mock.api7.ai: 10
```
### Define OR Matching Conditions With APISIX Expressions
The following example demonstrates how to use [lua-resty-expr](https://github.com/api7/lua-resty-expr#operator-list) in rules to conditionally execute the Plugin when either set of conditions is satisfied.
Create a Route and configure the `traffic-split` Plugin to redirect traffic when either set of the configured conditions is satisfied:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${ADMIN_API_KEY}" \
-d '{
"uri": "/headers",
"id": "traffic-split-route",
"plugins": {
"traffic-split": {
"rules": [
{
"match": [
{
"vars": [
["arg_name","==","jack"],
["http_user-id",">","23"],
["http_apisix-key","~~","[a-z]+"]
]
},
{
"vars": [
["arg_name2","==","rose"],
["http_user-id2","!",">","33"],
["http_apisix-key2","~~","[a-z]+"]
]
}
],
"weighted_upstreams": [
{
"upstream": {
"type": "roundrobin",
"scheme": "https",
"pass_host": "node",
"nodes": {
"httpbin.org:443":1
}
},
"weight": 3
},
{
"weight": 2
}
]
}
]
}
},
"upstream": {
"type": "roundrobin",
"scheme": "https",
"pass_host": "node",
"nodes": {
"mock.api7.ai:443":1
}
}
}'
```
Alternatively, you can use the OR operator in [lua-resty-expr](https://github.com/api7/lua-resty-expr#operator-list) to combine these conditions in a single `match` entry.
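The sketch below shows how the two condition sets above might be collapsed into one `vars` expression; the nesting follows the [lua-resty-expr](https://github.com/api7/lua-resty-expr#operator-list) logical-operator syntax for `AND` and `OR`:

```json
"vars": [
    ["OR",
        ["AND",
            ["arg_name", "==", "jack"],
            ["http_user-id", ">", "23"],
            ["http_apisix-key", "~~", "[a-z]+"]
        ],
        ["AND",
            ["arg_name2", "==", "rose"],
            ["http_user-id2", "!", ">", "33"],
            ["http_apisix-key2", "~~", "[a-z]+"]
        ]
    ]
]
```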
If conditions are satisfied, 60% of the traffic should be directed to `httpbin.org` and the other 40% should be directed to `mock.api7.ai`. If conditions are not satisfied, all traffic should be directed to `mock.api7.ai`.
Send 10 consecutive requests that satisfy the second set of conditions to verify:
```shell
resp=$(seq 10 | xargs -I{} curl "http://127.0.0.1:9080/headers?name2=rose" -H 'user-id:30' -H 'apisix-key2: helloapisix' -sL) && \
count_httpbin=$(echo "$resp" | grep "httpbin.org" | wc -l) && \
count_mockapi7=$(echo "$resp" | grep "mock.api7.ai" | wc -l) && \
echo httpbin.org: $count_httpbin, mock.api7.ai: $count_mockapi7
```
You should see a response similar to the following:
```text
httpbin.org: 6, mock.api7.ai: 4
```
Send 10 consecutive requests that do not satisfy any set of conditions to verify:
```shell
resp=$(seq 10 | xargs -I{} curl "http://127.0.0.1:9080/headers?name=random" -sL) && \
count_httpbin=$(echo "$resp" | grep "httpbin.org" | wc -l) && \
count_mockapi7=$(echo "$resp" | grep "mock.api7.ai" | wc -l) && \
echo httpbin.org: $count_httpbin, mock.api7.ai: $count_mockapi7
```
You should see a response similar to the following:
```text
httpbin.org: 0, mock.api7.ai: 10
```
### Configure Different Rules for Different Upstreams
The following example demonstrates how to set up a one-to-one mapping between rule sets and Upstreams.
Create a Route and configure the `traffic-split` Plugin with the following matching rules to redirect requests that contain the header `x-api-id: 1` or `x-api-id: 2` to the corresponding Upstream service:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${ADMIN_API_KEY}" \
-d '{
"uri": "/headers",
"id": "traffic-split-route",
"plugins": {
"traffic-split": {
"rules": [
{
"match": [
{
"vars": [
["http_x-api-id","==","1"]
]
}
],
"weighted_upstreams": [
{
"upstream": {
"type": "roundrobin",
"scheme": "https",
"pass_host": "node",
"nodes": {
"httpbin.org:443":1
}
},
"weight": 1
}
]
},
{
"match": [
{
"vars": [
["http_x-api-id","==","2"]
]
}
],
"weighted_upstreams": [
{
"upstream": {
"type": "roundrobin",
"scheme": "https",
"pass_host": "node",
"nodes": {
"mock.api7.ai:443":1
}
},
"weight": 1
}
]
}
]
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"postman-echo.com:443": 1
},
"scheme": "https",
"pass_host": "node"
}
}'
```
Send a request with header `x-api-id: 1`:
```shell
curl "http://127.0.0.1:9080/headers" -H 'x-api-id: 1'
```
You should see an `HTTP/1.1 200 OK` response similar to the following:
```json
{
"headers": {
"Accept": "*/*",
"Host": "httpbin.org",
...
}
}
```
Send a request with header `x-api-id: 2`:
```shell
curl "http://127.0.0.1:9080/headers" -H 'x-api-id: 2'
```
You should see an `HTTP/1.1 200 OK` response similar to the following:
```json
{
"headers": {
"accept": "*/*",
"host": "mock.api7.ai",
...
}
}
```
Send a request without any additional header:
```shell
curl "http://127.0.0.1:9080/headers"
```
You should see a response similar to the following:
```json
{
"headers": {
"accept": "*/*",
"host": "postman-echo.com",
...
}
}
```
View File
@@ -0,0 +1,159 @@
---
title: ua-restriction
keywords:
- Apache APISIX
- API Gateway
- UA restriction
description: The ua-restriction Plugin restricts access to upstream resources using an allowlist or denylist of user agents, preventing overload from web crawlers and enhancing API security.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
<head>
<link rel="canonical" href="https://docs.api7.ai/hub/ua-restriction" />
</head>
## Description
The `ua-restriction` Plugin restricts access to upstream resources by configuring either an allowlist or a denylist of user agents. A common use case is to prevent web crawlers from overloading the upstream resources and causing service degradation.
## Attributes
| Name | Type | Required | Default | Valid values | Description |
|----------------|---------------|----------|--------------|-------------------------|---------------------------------------------------------------------------------|
| bypass_missing | boolean | False | false | | If true, bypass the user agent restriction check when the `User-Agent` header is missing. |
| allowlist | array[string] | False | | | List of user agents to allow. Supports regular expressions. Either `allowlist` or `denylist` must be configured, but they cannot be configured at the same time. |
| denylist | array[string] | False | | | List of user agents to deny. Supports regular expressions. Either `allowlist` or `denylist` must be configured, but they cannot be configured at the same time. |
| message | string | False | "Not allowed" | | Message returned when the user agent is denied access. |
## Examples
The examples below demonstrate how you can configure `ua-restriction` for different scenarios.
:::note
You can fetch the `admin_key` from `config.yaml` and save it to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
### Reject Web Crawlers and Customize Error Message
The following example demonstrates how you can configure the Plugin to fend off unwanted web crawlers and customize the rejection message.
Create a Route and configure the Plugin to block specific crawlers from accessing resources with a customized message:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "ua-restriction-route",
"uri": "/anything",
"plugins": {
"ua-restriction": {
"bypass_missing": false,
"denylist": [
"(Baiduspider)/(\\d+)\\.(\\d+)",
"bad-bot-1"
],
"message": "Access denied"
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Send a request to the Route:
```shell
curl -i "http://127.0.0.1:9080/anything"
```
You should receive an `HTTP/1.1 200 OK` response.
Send another request to the Route with a disallowed user agent:
```shell
curl -i "http://127.0.0.1:9080/anything" -H 'User-Agent: Baiduspider/5.0'
```
You should receive an `HTTP/1.1 403 Forbidden` response with the following message:
```text
{"message":"Access denied"}
```
### Bypass UA Restriction Checks
The following example demonstrates how to configure the Plugin so that requests without a `User-Agent` header bypass the restriction check.
Create a Route as follows:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "ua-restriction-route",
"uri": "/anything",
"plugins": {
"ua-restriction": {
"bypass_missing": true,
"allowlist": [
"good-bot-1"
],
"message": "Access denied"
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org:80": 1
}
}
}'
```
Send a request to the Route without modifying the user agent:
```shell
curl -i "http://127.0.0.1:9080/anything"
```
You should receive an `HTTP/1.1 403 Forbidden` response with the following message:
```text
{"message":"Access denied"}
```
Send another request to the Route with an empty user agent:
```shell
curl -i "http://127.0.0.1:9080/anything" -H 'User-Agent: '
```
You should receive an `HTTP/1.1 200 OK` response.
View File
@@ -0,0 +1,186 @@
---
title: udp-logger
keywords:
- Apache APISIX
- API Gateway
- Plugin
- UDP Logger
description: This document contains information about the Apache APISIX udp-logger Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `udp-logger` Plugin can be used to push request log data to UDP servers.
This provides the ability to send log data as JSON objects to monitoring tools and other UDP servers.
This Plugin also allows you to push logs as a batch to your external UDP server. It might take some time to receive the log data, as it is sent automatically after the timer in the [batch processor](../batch-processor.md) expires.
## Attributes
| Name | Type | Required | Default | Valid values | Description |
|------------------|---------|----------|--------------|--------------|----------------------------------------------------------|
| host | string | True | | | IP address or the hostname of the UDP server. |
| port             | integer | True     |              | [0,...]      | Port of the UDP server.                                   |
| timeout          | integer | False    | 3            | [1,...]      | Timeout in seconds for sending data to the UDP server.    |
| log_format       | object  | False    |              |              | Log format declared as key value pairs in JSON format. Values only support strings. [APISIX](../apisix-variable.md) or [Nginx](http://nginx.org/en/docs/varindex.html) variables can be used by prefixing the string with `$`. |
| name             | string  | False    | "udp logger" |              | Unique identifier for the batch processor. If you use Prometheus to monitor APISIX metrics, the name is exported in `apisix_batch_process_entries`. |
| include_req_body | boolean | False    | false        | [false, true] | When set to `true`, includes the request body in the log. |
| include_req_body_expr | array | False | | | Filter for when the `include_req_body` attribute is set to `true`. The request body is only logged when the expression set here evaluates to `true`. See [lua-resty-expr](https://github.com/api7/lua-resty-expr) for more. |
| include_resp_body | boolean | False | false | [false, true] | When set to `true`, includes the response body in the log. |
| include_resp_body_expr | array | False | | | Filter for when the `include_resp_body` attribute is set to `true`. The response body is only logged when the expression set here evaluates to `true`. See [lua-resty-expr](https://github.com/api7/lua-resty-expr) for more. |
This Plugin supports using batch processors to aggregate and process entries (logs/data) in a batch. This avoids the need for frequently submitting the data. The batch processor submits data every `5` seconds or when the data in the queue reaches `1000`. See [Batch Processor](../batch-processor.md#configuration) for more information or to set your custom configuration.
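As an illustration, a `udp-logger` configuration that tunes the batch processor might look like the following sketch. The `batch_max_size`, `inactive_timeout`, `buffer_duration`, `max_retry_count`, and `retry_delay` attributes come from the batch processor configuration; the values shown are examples, not recommendations:

```json
"udp-logger": {
  "host": "127.0.0.1",
  "port": 3000,
  "name": "udp logger",
  "batch_max_size": 1000,
  "inactive_timeout": 5,
  "buffer_duration": 60,
  "max_retry_count": 0,
  "retry_delay": 1
}
```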
### Example of default log format
```json
{
"apisix_latency": 99.999988555908,
"service_id": "",
"server": {
"version": "3.7.0",
"hostname": "localhost"
},
"request": {
"method": "GET",
"headers": {
"connection": "close",
"host": "localhost"
},
"url": "http://localhost:1984/opentracing",
"size": 65,
"querystring": {},
"uri": "/opentracing"
},
"start_time": 1704527399740,
"client_ip": "127.0.0.1",
"response": {
"status": 200,
"size": 136,
"headers": {
"server": "APISIX/3.7.0",
"content-type": "text/plain",
"transfer-encoding": "chunked",
"connection": "close"
}
},
"upstream": "127.0.0.1:1982",
"route_id": "1",
"upstream_latency": 12,
"latency": 111.99998855591
}
```
## Metadata
You can also set the format of the logs by configuring the Plugin metadata. The following configurations are available:
| Name | Type | Required | Default | Description |
| ---------- | ------ | -------- | ----------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| log_format | object | False | | Log format declared as key value pairs in JSON format. Values only support strings. [APISIX](../apisix-variable.md) or [Nginx](http://nginx.org/en/docs/varindex.html) variables can be used by prefixing the string with `$`. |
:::info IMPORTANT
Configuring the Plugin metadata is global in scope. This means that it will take effect on all Routes and Services which use the `udp-logger` Plugin.
:::
The example below shows how you can configure through the Admin API:
:::note
You can fetch the `admin_key` from `config.yaml` and save it to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl http://127.0.0.1:9180/apisix/admin/plugin_metadata/udp-logger -H "X-API-KEY: $admin_key" -X PUT -d '
{
"log_format": {
"host": "$host",
"@timestamp": "$time_iso8601",
"client_ip": "$remote_addr"
}
}'
```
With this configuration, your logs would be formatted as shown below:
```json
{"@timestamp":"2023-01-09T14:47:25+08:00","route_id":"1","host":"localhost","client_ip":"127.0.0.1"}
```
## Enable Plugin
The example below shows how you can enable the Plugin on a specific Route:
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/5 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"plugins": {
"udp-logger": {
"host": "127.0.0.1",
"port": 3000,
"batch_max_size": 1,
"name": "udp logger"
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
},
"uri": "/hello"
}'
```
## Example usage
Now, if you make a request to APISIX, it will be logged in your UDP server:
```shell
curl -i http://127.0.0.1:9080/hello
```
## Delete Plugin
To remove the `udp-logger` Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"methods": ["GET"],
"uri": "/hello",
"plugins": {},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
}
}'
```
View File
@@ -0,0 +1,120 @@
---
title: uri-blocker
keywords:
- Apache APISIX
- API Gateway
- URI Blocker
description: This document contains information about the Apache APISIX uri-blocker Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `uri-blocker` Plugin intercepts user requests whose URI matches any of the configured `block_rules`.
## Attributes
| Name | Type | Required | Default | Valid values | Description |
|------------------|---------------|----------|---------|--------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| block_rules | array[string] | True | | | List of regex filter rules. If the request URI hits any one of the rules, the response code is set to the `rejected_code` and the user request is terminated. For example, `["root.exe", "root.m+"]`. |
| rejected_code | integer | False | 403 | [200, ...] | HTTP status code returned when the request URI hits any of the `block_rules`. |
| rejected_msg | string | False | | non-empty | HTTP response body returned when the request URI hits any of the `block_rules`. |
| case_insensitive | boolean | False | false | | When set to `true`, ignores the case when matching request URI. |
## Enable Plugin
The example below enables the `uri-blocker` Plugin on a specific Route:
:::note
You can fetch the `admin_key` from `config.yaml` and save it to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl -i http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/*",
"plugins": {
"uri-blocker": {
"block_rules": ["root.exe", "root.m+"]
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
}
}'
```
## Example usage
Once you have configured the Plugin as shown above, you can try accessing the file:
```shell
curl -i http://127.0.0.1:9080/root.exe?a=a
```
```shell
HTTP/1.1 403 Forbidden
Date: Wed, 17 Jun 2020 13:55:41 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 150
Connection: keep-alive
Server: APISIX web server
... ...
```
You can also set a `rejected_msg`, which will be added to the response body.
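For example, updating the Route above along these lines (a sketch reusing the same Route) would set a custom message:

```shell
curl -i http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
  "uri": "/*",
  "plugins": {
    "uri-blocker": {
      "block_rules": ["root.exe", "root.m+"],
      "rejected_msg": "access is not allowed"
    }
  },
  "upstream": {
    "type": "roundrobin",
    "nodes": {
      "127.0.0.1:1980": 1
    }
  }
}'
```

A blocked request then receives a response similar to the following: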
```shell
HTTP/1.1 403 Forbidden
Date: Wed, 17 Jun 2020 13:55:41 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 150
Connection: keep-alive
Server: APISIX web server
{"error_msg":"access is not allowed"}
```
## Delete Plugin
To remove the `uri-blocker` Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/*",
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
}
}'
```
View File
@@ -0,0 +1,296 @@
---
title: wolf-rbac
keywords:
- Apache APISIX
- API Gateway
- Plugin
- wolf RBAC
- wolf-rbac
description: This document contains information about the Apache APISIX wolf-rbac Plugin.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description
The `wolf-rbac` Plugin integrates with [wolf](https://github.com/iGeeky/wolf) to provide a [role-based access control](https://en.wikipedia.org/wiki/Role-based_access_control) system for a Route or a Service. This Plugin can be used with a [Consumer](../terminology/consumer.md).
## Attributes
| Name | Type | Required | Default | Description |
|---------------|--------|----------|--------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| server | string | False | "http://127.0.0.1:12180" | Service address of wolf server. |
| appid | string | False | "unset" | App id added in wolf console. This field supports saving the value in Secret Manager using the [APISIX Secret](../terminology/secret.md) resource. |
| header_prefix | string | False | "X-" | Prefix for a custom HTTP header. After authentication is successful, three headers will be added to the request header (for backend) and response header (for frontend) namely: `X-UserId`, `X-Username`, and `X-Nickname`. |
## API
This Plugin will add the following endpoints when enabled:
- `/apisix/plugin/wolf-rbac/login`
- `/apisix/plugin/wolf-rbac/change_pwd`
- `/apisix/plugin/wolf-rbac/user_info`
:::note
You may need to use the [public-api](public-api.md) Plugin to expose this endpoint.
:::
## Pre-requisites
To use this Plugin, you have to first [install wolf](https://github.com/iGeeky/wolf/blob/master/quick-start-with-docker/README.md) and start it.
Once you have done that, you need to add the `application`, `admin`, `normal user`, `permission`, and `resource` entries, and authorize users in the [wolf-console](https://github.com/iGeeky/wolf/blob/master/docs/usage.md).
## Enable Plugin
You need to first configure the Plugin on a Consumer:
:::note
You can fetch the `admin_key` from `config.yaml` and save it to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
```shell
curl http://127.0.0.1:9180/apisix/admin/consumers -H "X-API-KEY: $admin_key" -X PUT -d '
{
"username":"wolf_rbac",
"plugins":{
"wolf-rbac":{
"server":"http://127.0.0.1:12180",
"appid":"restful"
}
},
"desc":"wolf-rbac"
}'
```
:::note
The `appid` added in the configuration should already exist in wolf.
:::
You can now add the Plugin to a Route or a Service:
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"methods": ["GET"],
"uri": "/*",
"plugins": {
"wolf-rbac": {}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"www.baidu.com:80": 1
}
}
}'
```
You can also use the [APISIX Dashboard](/docs/dashboard/USER_GUIDE) to complete the operation through a web UI.
<!--
![add a consumer](https://raw.githubusercontent.com/apache/apisix/master/docs/assets/images/plugin/wolf-rbac-1.png)
![enable wolf-rbac plugin](https://raw.githubusercontent.com/apache/apisix/master/docs/assets/images/plugin/wolf-rbac-2.png)
-->
## Example usage
You can use the `public-api` Plugin to expose the API:
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/wal -H "X-API-KEY: $admin_key" -X PUT -d '
{
"uri": "/apisix/plugin/wolf-rbac/login",
"plugins": {
"public-api": {}
}
}'
```
Similarly, you can set up the Routes for `change_pwd` and `user_info`, as sketched below.
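A sketch following the same pattern as the `login` Route above (the Route IDs are arbitrary):

```shell
curl http://127.0.0.1:9180/apisix/admin/routes/wac -H "X-API-KEY: $admin_key" -X PUT -d '
{
    "uri": "/apisix/plugin/wolf-rbac/change_pwd",
    "plugins": {
        "public-api": {}
    }
}'

curl http://127.0.0.1:9180/apisix/admin/routes/wau -H "X-API-KEY: $admin_key" -X PUT -d '
{
    "uri": "/apisix/plugin/wolf-rbac/user_info",
    "plugins": {
        "public-api": {}
    }
}'
```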
You can now log in and get a wolf `rbac_token`:
```shell
curl http://127.0.0.1:9080/apisix/plugin/wolf-rbac/login -i \
-H "Content-Type: application/json" \
-d '{"appid": "restful", "username":"test", "password":"user-password", "authType":1}'
```
```shell
HTTP/1.1 200 OK
Date: Wed, 24 Jul 2019 10:33:31 GMT
Content-Type: text/plain
Transfer-Encoding: chunked
Connection: keep-alive
Server: APISIX web server
{"rbac_token":"V1#restful#eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZCI6NzQ5LCJ1c2VybmFtZSI6InRlc3QiLCJtYW5hZ2VyIjoiIiwiYXBwaWQiOiJyZXN0ZnVsIiwiaWF0IjoxNTc5NDQ5ODQxLCJleHAiOjE1ODAwNTQ2NDF9.n2-830zbhrEh6OAxn4K_yYtg5pqfmjpZAjoQXgtcuts","user_info":{"nickname":"test","username":"test","id":"749"}}
```
:::note
The `appid`, `username`, and `password` must be configured in the wolf system.
`authType` is the authentication type: `1` for password authentication (default) and `2` for LDAP authentication (supported since wolf v0.5.0).
:::
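For example, if your wolf deployment is backed by LDAP (wolf v0.5.0+), the same login call might look like the following sketch, where the credentials are placeholders:

```shell
curl http://127.0.0.1:9080/apisix/plugin/wolf-rbac/login -i \
    -H "Content-Type: application/json" \
    -d '{"appid": "restful", "username":"ldap-user", "password":"ldap-password", "authType":2}'
```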
You can also make a POST request with `x-www-form-urlencoded` instead of JSON:
```shell
curl http://127.0.0.1:9080/apisix/plugin/wolf-rbac/login -i \
-H "Content-Type: application/x-www-form-urlencoded" \
-d 'appid=restful&username=test&password=user-password'
```
Now you can test the Route:
- without token:
```shell
curl http://127.0.0.1:9080/ -H"Host: www.baidu.com" -i
```
```
HTTP/1.1 401 Unauthorized
...
{"message":"Missing rbac token in request"}
```
- with token in `Authorization` header:
```shell
curl http://127.0.0.1:9080/ -H"Host: www.baidu.com" \
-H 'Authorization: V1#restful#eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZCI6NzQ5LCJ1c2VybmFtZSI6InRlc3QiLCJtYW5hZ2VyIjoiIiwiYXBwaWQiOiJyZXN0ZnVsIiwiaWF0IjoxNTc5NDQ5ODQxLCJleHAiOjE1ODAwNTQ2NDF9.n2-830zbhrEh6OAxn4K_yYtg5pqfmjpZAjoQXgtcuts' -i
```
```shell
HTTP/1.1 200 OK
<!DOCTYPE html>
```
- with token in `x-rbac-token` header:
```shell
curl http://127.0.0.1:9080/ -H"Host: www.baidu.com" \
-H 'x-rbac-token: V1#restful#eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZCI6NzQ5LCJ1c2VybmFtZSI6InRlc3QiLCJtYW5hZ2VyIjoiIiwiYXBwaWQiOiJyZXN0ZnVsIiwiaWF0IjoxNTc5NDQ5ODQxLCJleHAiOjE1ODAwNTQ2NDF9.n2-830zbhrEh6OAxn4K_yYtg5pqfmjpZAjoQXgtcuts' -i
```
```shell
HTTP/1.1 200 OK
<!DOCTYPE html>
```
- with token in request parameters:
```shell
curl 'http://127.0.0.1:9080?rbac_token=V1%23restful%23eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZCI6NzQ5LCJ1c2VybmFtZSI6InRlc3QiLCJtYW5hZ2VyIjoiIiwiYXBwaWQiOiJyZXN0ZnVsIiwiaWF0IjoxNTc5NDQ5ODQxLCJleHAiOjE1ODAwNTQ2NDF9.n2-830zbhrEh6OAxn4K_yYtg5pqfmjpZAjoQXgtcuts' -H"Host: www.baidu.com" -i
```
```shell
HTTP/1.1 200 OK
<!DOCTYPE html>
```
- with token in cookie:
```shell
curl http://127.0.0.1:9080 -H"Host: www.baidu.com" \
--cookie x-rbac-token=V1#restful#eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZCI6NzQ5LCJ1c2VybmFtZSI6InRlc3QiLCJtYW5hZ2VyIjoiIiwiYXBwaWQiOiJyZXN0ZnVsIiwiaWF0IjoxNTc5NDQ5ODQxLCJleHAiOjE1ODAwNTQ2NDF9.n2-830zbhrEh6OAxn4K_yYtg5pqfmjpZAjoQXgtcuts -i
```
```
HTTP/1.1 200 OK
<!DOCTYPE html>
```
And to get the user's information:
```shell
curl http://127.0.0.1:9080/apisix/plugin/wolf-rbac/user_info \
--cookie x-rbac-token=V1#restful#eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZCI6NzQ5LCJ1c2VybmFtZSI6InRlc3QiLCJtYW5hZ2VyIjoiIiwiYXBwaWQiOiJyZXN0ZnVsIiwiaWF0IjoxNTc5NDQ5ODQxLCJleHAiOjE1ODAwNTQ2NDF9.n2-830zbhrEh6OAxn4K_yYtg5pqfmjpZAjoQXgtcuts -i
```
```shell
HTTP/1.1 200 OK
{
"user_info":{
"nickname":"test",
"lastLogin":1582816780,
"id":749,
"username":"test",
"appIDs":["restful"],
"manager":"none",
"permissions":{"USER_LIST":true},
"profile":null,
"roles":{},
"createTime":1578820506,
"email":""
}
}
```
And to change a user's password:
```shell
curl http://127.0.0.1:9080/apisix/plugin/wolf-rbac/change_pwd \
-H "Content-Type: application/json" \
--cookie x-rbac-token=V1#restful#eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZCI6NzQ5LCJ1c2VybmFtZSI6InRlc3QiLCJtYW5hZ2VyIjoiIiwiYXBwaWQiOiJyZXN0ZnVsIiwiaWF0IjoxNTc5NDQ5ODQxLCJleHAiOjE1ODAwNTQ2NDF9.n2-830zbhrEh6OAxn4K_yYtg5pqfmjpZAjoQXgtcuts -i \
-X PUT -d '{"oldPassword": "old password", "newPassword": "new password"}'
```
```shell
HTTP/1.1 200 OK
{"message":"success to change password"}
```
## Delete Plugin
To remove the `wolf-rbac` Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.
```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $admin_key" -X PUT -d '
{
"methods": ["GET"],
"uri": "/*",
"plugins": {
},
"upstream": {
"type": "roundrobin",
"nodes": {
"www.baidu.com:80": 1
}
}
}'
```
View File
@@ -0,0 +1,386 @@
---
title: workflow
keywords:
- Apache APISIX
- API Gateway
- Plugin
- workflow
- traffic control
description: The workflow Plugin supports the conditional execution of user-defined actions to client traffic based on a given set of rules. This provides a granular approach to implement complex traffic management.
---
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
<head>
<link rel="canonical" href="https://docs.api7.ai/hub/workflow" />
</head>
## Description
The `workflow` Plugin supports the conditional execution of user-defined actions to client traffic based on a given set of rules, defined using [lua-resty-expr](https://github.com/api7/lua-resty-expr#operator-list). This provides a granular approach to traffic management.
## Attributes
| Name | Type | Required | Default | Valid values | Description |
| ---------------------------- | ------------- | -------- | ------- | ------------ | ------------------------------------------------------------ |
| rules | array[object] | True | | | An array of one or more pairs of matching conditions and actions to be executed. |
| rules.case | array[array] | False | | | An array of one or more matching conditions in the form of [lua-resty-expr](https://github.com/api7/lua-resty-expr#operator-list). For example, `{"arg_name", "==", "json"}`. |
| rules.actions | array[object] | True | | | An array of actions to be executed when a condition is successfully matched. Currently, the array only supports one action, and it should be either `return` or `limit-count`. When the action is configured to be `return`, you can configure an HTTP status code to return to the client when the condition is matched. When the action is configured to be `limit-count`, you can configure all options of the [`limit-count`](./limit-count.md) Plugin, except for `group`. |
## Examples
The examples below demonstrate how you can use the `workflow` Plugin for different scenarios.
:::note
You can fetch the `admin_key` from `config.yaml` and save it to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
### Return Response HTTP Status Code Conditionally
The following example demonstrates a simple rule with one matching condition and one associated action to return HTTP status code conditionally.
Create a Route with the `workflow` Plugin to return HTTP status code 403 when the request's URI path is `/anything/rejected`:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "workflow-route",
"uri": "/anything/*",
"plugins": {
"workflow":{
"rules":[
{
"case":[
["uri", "==", "/anything/rejected"]
],
"actions":[
[
"return",
{"code": 403}
]
]
}
]
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org": 1
}
}
}'
```
Send a request that matches none of the rules:
```shell
curl -i "http://127.0.0.1:9080/anything/anything"
```
You should receive an `HTTP/1.1 200 OK` response.
Send a request that matches the configured rule:
```shell
curl -i "http://127.0.0.1:9080/anything/rejected"
```
You should receive an `HTTP/1.1 403 Forbidden` response with the following body:
```text
{"error_msg":"rejected by workflow"}
```
### Apply Rate Limiting Conditionally by URI and Query Parameter
The following example demonstrates a rule with two matching conditions and one associated action to rate limit requests conditionally.
Create a Route with the `workflow` Plugin to apply rate limiting when the URI path is `/anything/rate-limit` and the query parameter `env` value is `v1`:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "workflow-route",
"uri": "/anything/*",
"plugins":{
"workflow":{
"rules":[
{
"case":[
["uri", "==", "/anything/rate-limit"],
["arg_env", "==", "v1"]
],
"actions":[
[
"limit-count",
{
"count":1,
"time_window":60,
"rejected_code":429
}
]
]
}
]
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org": 1
}
}
}'
```
Send two consecutive requests that match the rule:
```shell
curl -i "http://127.0.0.1:9080/anything/rate-limit?env=v1"
```
You should receive an `HTTP/1.1 200 OK` response for the first request and an `HTTP/1.1 429 Too Many Requests` response for the second.
Send requests that do not match the conditions:
```shell
curl -i "http://127.0.0.1:9080/anything/anything?env=v1"
```
You should receive `HTTP/1.1 200 OK` responses for all requests, as they are not rate limited.
### Apply Rate Limiting Conditionally by Consumers
The following example demonstrates how to configure the Plugin to perform rate limiting based on the following specifications:
* Consumer `john` should have a quota of 5 requests within a 30-second window
* Consumer `jane` should have a quota of 3 requests within a 30-second window
* All other consumers should have a quota of 2 requests within a 30-second window
While this example uses [`key-auth`](./key-auth.md), you can easily replace it with other authentication Plugins.
Create a Consumer `john`:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"username": "john"
}'
```
Create a `key-auth` credential for the consumer:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers/john/credentials" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "cred-john-key-auth",
"plugins": {
"key-auth": {
"key": "john-key"
}
}
}'
```
Create a second Consumer `jane`:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"username": "jane"
}'
```
Create a `key-auth` credential for the consumer:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers/jane/credentials" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "cred-jane-key-auth",
"plugins": {
"key-auth": {
"key": "jane-key"
}
}
}'
```
Create a third Consumer `jimmy`:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"username": "jimmy"
}'
```
Create a `key-auth` credential for the consumer:
```shell
curl "http://127.0.0.1:9180/apisix/admin/consumers/jimmy/credentials" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "cred-jimmy-key-auth",
"plugins": {
"key-auth": {
"key": "jimmy-key"
}
}
}'
```
Create a Route with the `workflow` and `key-auth` Plugins, with the desired rate limiting rules:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${admin_key}" \
-d '{
"id": "workflow-route",
"uri": "/anything",
"plugins":{
"key-auth": {},
"workflow":{
"rules":[
{
"actions": [
[
"limit-count",
{
"count": 5,
"key": "consumer_john",
"key_type": "constant",
"rejected_code": 429,
"time_window": 30
}
]
],
"case": [
[
"consumer_name",
"==",
"john"
]
]
},
{
"actions": [
[
"limit-count",
{
"count": 3,
"key": "consumer_jane",
"key_type": "constant",
"rejected_code": 429,
"time_window": 30
}
]
],
"case": [
[
"consumer_name",
"==",
"jane"
]
]
},
{
"actions": [
[
"limit-count",
{
"count": 2,
"key": "$consumer_name",
"key_type": "var",
"rejected_code": 429,
"time_window": 30
}
]
]
}
]
}
},
"upstream": {
"type": "roundrobin",
"nodes": {
"httpbin.org": 1
}
}
}'
```
To verify, send 6 consecutive requests with `john`'s key:
```shell
resp=$(seq 6 | xargs -I{} curl "http://127.0.0.1:9080/anything" -H 'apikey: john-key' -o /dev/null -s -w "%{http_code}\n") && \
count_200=$(echo "$resp" | grep "200" | wc -l) && \
count_429=$(echo "$resp" | grep "429" | wc -l) && \
echo "200": $count_200, "429": $count_429
```
You should see the following response, showing that out of the 6 requests, 5 requests were successful (status code 200) while the other was rejected (status code 429).
```text
200: 5, 429: 1
```
Send 6 consecutive requests with `jane`'s key:
```shell
resp=$(seq 6 | xargs -I{} curl "http://127.0.0.1:9080/anything" -H 'apikey: jane-key' -o /dev/null -s -w "%{http_code}\n") && \
count_200=$(echo "$resp" | grep "200" | wc -l) && \
count_429=$(echo "$resp" | grep "429" | wc -l) && \
echo "200": $count_200, "429": $count_429
```
You should see the following response, showing that out of the 6 requests, 3 requests were successful (status code 200) while the others were rejected (status code 429).
```text
200: 3, 429: 3
```
Send 3 consecutive requests with `jimmy`'s key:
```shell
resp=$(seq 3 | xargs -I{} curl "http://127.0.0.1:9080/anything" -H 'apikey: jimmy-key' -o /dev/null -s -w "%{http_code}\n") && \
count_200=$(echo "$resp" | grep "200" | wc -l) && \
count_429=$(echo "$resp" | grep "429" | wc -l) && \
echo "200": $count_200, "429": $count_429
```
You should see the following response, showing that out of the 3 requests, 2 requests were successful (status code 200) while the other was rejected (status code 429).
```text
200: 2, 429: 1
```
Some files were not shown because too many files have changed in this diff.