Standard Analyzer

The standard analyzer is the default analyzer. It provides grammar-based tokenization, based on the Unicode Text Segmentation algorithm specified in the Unicode standard, and works well for most languages.

Example

POST _analyze
{
  "analyzer": "standard",
  "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
}

Tokenization result

[ the, 2, quick, brown, foxes, jumped, over, the, lazy, dog's, bone ]
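
For reference, the _analyze API also reports each token's offsets, type, and position. An abridged response for the request above would look roughly like this (only the first two tokens shown; the remaining tokens follow the same shape):

{
  "tokens": [
    {
      "token": "the",
      "start_offset": 0,
      "end_offset": 3,
      "type": "<ALPHANUM>",
      "position": 0
    },
    {
      "token": "2",
      "start_offset": 4,
      "end_offset": 5,
      "type": "<NUM>",
      "position": 1
    },
    ...
  ]
}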

Configuration

The standard analyzer accepts the following parameters:

  • max_token_length The maximum token length. If a token exceeds this length, it is split at max_token_length intervals. Defaults to 255.
  • stopwords A pre-defined stop words list such as _english_, or an array containing a list of stop words. Defaults to _none_.

Configuration example

PUT my-index-000001
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_english_analyzer": {
          "type": "standard",
          "max_token_length": 5,
          "stopwords": "_english_"
        }
      }
    }
  }
}

POST my-index-000001/_analyze
{
  "analyzer": "my_english_analyzer",
  "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
}

The above example produces the following terms. Note that jumped is split into jumpe and d because max_token_length is 5, and the stop word the is removed by the _english_ stopwords list:

[ 2, quick, brown, foxes, jumpe, d, over, lazy, dog's, bone ]

Definition

The Standard Analyzer consists of:

  • Tokenizer
    • Standard Tokenizer
  • Token Filters
    • Lower Case Token Filter
    • Stop Token Filter (disabled by default)

To customize the standard analyzer, you typically rebuild it and add token filters to extend its behavior:

PUT /standard_example
{
  "settings": {
    "analysis": {
      "analyzer": {
        "rebuilt_standard": {
          "tokenizer": "standard",
          "filter": [
            "lowercase"
          ]
        }
      }
    }
  }
}
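
For example, to re-enable the Stop Token Filter in the rebuilt analyzer, you can define a stop filter and append it to the filter chain. This is a minimal sketch; the index name stop_example and the names english_stop and rebuilt_standard_with_stop are illustrative:

PUT /stop_example
{
  "settings": {
    "analysis": {
      "filter": {
        "english_stop": {
          "type": "stop",
          "stopwords": "_english_"
        }
      },
      "analyzer": {
        "rebuilt_standard_with_stop": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "english_stop"
          ]
        }
      }
    }
  }
}

With this configuration, analyzing the sample sentence would keep the standard tokenization and lowercasing, but drop English stop words such as the.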