
Below are some simple examples of using Elasticsearch (es) together with Spring Boot. For full details, see the official documentation: https://www.elastic.co/guide/en/elasticsearch/client/java-rest/current/java-rest-high-supported-apis.html
1. Environment and dependencies
1.1 Check the es version you are running; the version used in this demo is 7.15.1
<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>elasticsearch-rest-high-level-client</artifactId>
    <version>7.15.1</version>
</dependency>
Configuration properties:
spring:
elasticsearch:
rest:
uris: http://192.168.72.143:9200
This is a single-node setup, so one uri on port 9200 is enough. Note that the REST client always talks HTTP on port 9200; port 9300 is the internal transport port and is not used by the REST client. This simple configuration is fine for a demo, but most companies run a cluster, in which case you list every node's address here, or configure the addresses yourself in the es client bean.
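For a cluster, the same property takes a list of node addresses; a sketch (the host names below are placeholders):

```yaml
spring:
  elasticsearch:
    rest:
      uris:
        - http://es-node-1:9200
        - http://es-node-2:9200
        - http://es-node-3:9200
```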
2. Index operations
In practice these operations are run as commands in Kibana; at the code level it is the same story, except the commands become methods and classes. Below, each Kibana command is shown next to the corresponding Java code.
2.1 Creating an index
- In Kibana, creating an index looks like this:
PUT wang_index_01
{
"settings": {
"number_of_shards": 1,
"number_of_replicas": 1
}
}
- Corresponding Java code:
// Create the index request
CreateIndexRequest indexRequest = new CreateIndexRequest("wang_index_01");
// Index settings
indexRequest.settings(Settings.builder()
// Number of shards
.put("index.number_of_shards", 1)
// Number of replicas
.put("index.number_of_replicas", 1)
);
// Get the indices client
IndicesClient indices = client.indices();
// Execute and get the response
CreateIndexResponse createIndexResponse = indices.create(indexRequest, RequestOptions.DEFAULT);
// Read the acknowledged flag
boolean acknowledged = createIndexResponse.isAcknowledged();
System.out.println("acknowledged = " + acknowledged);
2.2 Getting an index
- In Kibana:
GET wang_index_01
- Corresponding Java code:
// In the high-level client, the index name is passed to the constructor
GetIndexRequest getIndexRequest = new GetIndexRequest("wang_index_01");
GetIndexResponse getIndexResponse = client.indices().get(getIndexRequest, RequestOptions.DEFAULT);
System.out.println("getIndexResponse = " + getIndexResponse);
2.3 Deleting an index
- In Kibana:
DELETE wang_index_01
- Corresponding Java code:
DeleteIndexRequest deleteIndexRequest = new DeleteIndexRequest("wang_index_01");
AcknowledgedResponse delete = client.indices().delete(deleteIndexRequest, RequestOptions.DEFAULT);
boolean acknowledged = delete.isAcknowledged();
System.out.println("acknowledged = " + acknowledged);
2.4 Summary
When using the es client RestHighLevelClient, index operations (mappings aside) all start from the indices client:
// Get the indices client
IndicesClient indices = client.indices();
With IDEA's auto-completion you will then see the whole family of APIs.
Every API can be called synchronously or asynchronously. A synchronous method returns a response object, while the asynchronous method's name ends with the async suffix and takes a listener argument, which is notified (on a thread pool managed by the low-level client) once a response or an error arrives.
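As an example, the create call from 2.1 in its asynchronous form might look like the sketch below (it assumes the indexRequest and client from above and a live es node, so it is not runnable standalone):

```java
// Async variant of the create-index call; the listener runs on the
// low-level client's thread pool once a response or an error arrives
client.indices().createAsync(indexRequest, RequestOptions.DEFAULT,
        new ActionListener<CreateIndexResponse>() {
            @Override
            public void onResponse(CreateIndexResponse response) {
                System.out.println("acknowledged = " + response.isAcknowledged());
            }

            @Override
            public void onFailure(Exception e) {
                e.printStackTrace();
            }
        });
```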
You can then create the matching request and perform the corresponding operation; these requests all share a common parent interface, IndicesRequest, which is worth a closer look.
The index APIs:
- Create an index: CreateIndexRequest
- Get an index: GetIndexRequest
- Delete an index: DeleteIndexRequest
Index operations are all built on the various *IndexRequest classes.
Another common operation is checking whether an index exists: exists
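A quick sketch of that check (assuming the same client as above and a live es node):

```java
// Check whether the index exists before operating on it
GetIndexRequest existsRequest = new GetIndexRequest("wang_index_01");
boolean exists = client.indices().exists(existsRequest, RequestOptions.DEFAULT);
System.out.println("exists = " + exists);
```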
3. Mapping operations
3.1 Creating a mapping
Since the index was already created in section 2.1, we can go straight to the mapping.
- In Kibana, adding a mapping to an existing index looks like this:
PUT /wang_index_01/_mapping
{
"properties": {
"address": {
"type": "text",
"analyzer": "ik_max_word"
},
"userName": {
"type": "keyword"
},
"userPhone": {
"type": "text",
"analyzer": "ik_max_word"
}
}
}
- Corresponding Java code:
PutMappingRequest putMappingRequest = new PutMappingRequest("wang_index_01");
XContentBuilder builder = XContentFactory.jsonBuilder()
.startObject()
.startObject("properties")
.startObject("address")
.field("type", "text")
.field("analyzer", "ik_max_word")
.endObject()
.startObject("userName")
.field("type", "keyword")
.endObject()
.startObject("userPhone")
.field("type", "text")
.field("analyzer", "ik_max_word")
.endObject()
.endObject()
.endObject();
PutMappingRequest source = putMappingRequest.source(builder);
AcknowledgedResponse acknowledgedResponse = client.indices().putMapping(source, RequestOptions.DEFAULT);
boolean acknowledged = acknowledgedResponse.isAcknowledged();
System.out.println("acknowledged = " + acknowledged);
The code is really no different from the command: read startObject as { and endObject as }, and side by side with the Kibana command it looks practically identical.
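If the builder still feels verbose, source() also accepts the Kibana JSON verbatim as a string (a sketch; putMappingRequest as above, and the single-field mapping here is just an illustration):

```java
// Alternative: pass the raw JSON from Kibana instead of an XContentBuilder
putMappingRequest.source(
        "{\"properties\":{\"userName\":{\"type\":\"keyword\"}}}",
        XContentType.JSON);
```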
3.2 Viewing a mapping
- In Kibana, querying the mapping:
GET wang_index_01/_mapping
- Corresponding Java code:
GetMappingsRequest getMappingsRequest = new GetMappingsRequest();
getMappingsRequest.indices("wang_index_01");
GetMappingsResponse mapping = client.indices().getMapping(getMappingsRequest, RequestOptions.DEFAULT);
Map<String, MappingMetadata> mappings = mapping.mappings();
MappingMetadata metadata = mappings.get("wang_index_01");
String s = metadata.getSourceAsMap().toString();
System.out.println("s = " + s);
3.3 Summary
Everything related to indexes and mappings can be found under client.indices(); the names are self-explanatory:
- PutMappingRequest: add a mapping
- GetMappingsRequest: get a mapping
4. Document operations
4.1 Adding documents
- Adding data in Kibana:
POST wang_index_01/_doc/1
{
"address":"江西宜春上高泗溪镇",
"userName":"张三",
"userPhone":"15727538286"
}
- Corresponding code:
Map<String, Object> jsonMap = new HashMap<>();
jsonMap.put("address", "江西宜春上高泗溪镇");
jsonMap.put("userName", "张三");
jsonMap.put("userPhone", "15727538286");
IndexRequest indexRequest = new IndexRequest("wang_index_01").id("1").source(jsonMap);
Map<String, Object> jsonMap2 = new HashMap<>();
jsonMap2.put("address", "江西宜春高安祥符镇");
jsonMap2.put("userName", "李四");
jsonMap2.put("userPhone", "15727538286");
IndexRequest indexRequest2 = new IndexRequest("wang_index_01").id("2").source(jsonMap2);
BulkRequest request = new BulkRequest();
request.add(indexRequest);
request.add(indexRequest2);
BulkResponse bulk = client.bulk(request, RequestOptions.DEFAULT);
RestStatus status = bulk.status();
System.out.println("status = " + status);
Here two documents are inserted with a single bulk request.
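Note that a bulk request can succeed only partially; it is worth checking per-item failures on the response (a sketch, continuing from the bulk variable above):

```java
// status() alone hides per-item errors; hasFailures() surfaces them
if (bulk.hasFailures()) {
    System.out.println(bulk.buildFailureMessage());
}
```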
For more details see the official documentation: https://www.elastic.co/guide/en/elasticsearch/client/java-rest/current/java-rest-high-document-index.html
4.2 Deleting documents
- Deleting data in Kibana:
DELETE wang_index_01/_doc/1
- Corresponding Java code:
DeleteRequest deleteRequest = new DeleteRequest("wang_index_01");
deleteRequest.id("1");
DeleteResponse delete = client.delete(deleteRequest, RequestOptions.DEFAULT);
RestStatus status = delete.status();
System.out.println("status = " + status);
4.3 Getting documents
- Querying data in Kibana:
GET wang_index_01/_doc/1
- Corresponding Java code:
GetRequest getRequest = new GetRequest("wang_index_01");
getRequest.id("1");
GetResponse documentFields = client.get(getRequest, RequestOptions.DEFAULT);
String sourceAsString = documentFields.getSourceAsString();
System.out.println("sourceAsString = " + sourceAsString);
4.4 Updating documents
- Updating a document in Kibana:
PUT wang_index_01/_doc/1
{
"address":"江西宜春上高泗溪镇",
"userName":"哈哈",
"userPhone":"15727538288"
}
- Corresponding Java code:
Map<String, Object> jsonMap = new HashMap<>();
jsonMap.put("address", "江西宜春上高泗溪镇");
jsonMap.put("userName", "哈哈");
jsonMap.put("userPhone", "15727538287");
UpdateRequest updateRequest = new UpdateRequest("wang_index_01", "1");
updateRequest.doc(jsonMap);
UpdateResponse update = client.update(updateRequest, RequestOptions.DEFAULT);
RestStatus status = update.status();
System.out.println("status = " + status);
4.5 Summary
- BulkRequest: add data (in bulk)
- DeleteRequest: delete data
- UpdateRequest: update data
- GetRequest: get data
5. Search operations
Search is the most important part of ES; the sections below walk through the common search operations.
5.1 Query all: match_all
- Querying all documents in Kibana:
GET wang_index_01/_search
{
"query": {
"match_all": {}
}
}
- Corresponding code:
// Create the search request; the constructor accepts one or more index names
SearchRequest searchRequest = new SearchRequest("wang_index_01");
// Create the search source builder
SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
searchSourceBuilder.query(QueryBuilders.matchAllQuery());
// Attach the builder to the request
searchRequest.source(searchSourceBuilder);
// Execute and get the result set
SearchResponse search = client.search(searchRequest, RequestOptions.DEFAULT);
SearchHit[] hits = search.getHits().getHits();
// Iterate over each hit
for (SearchHit hit : hits) {
String sourceAsString = hit.getSourceAsString();
System.out.println("sourceAsString = " + sourceAsString);
}
The match_all in the Kibana command corresponds to QueryBuilders.matchAllQuery(); open up the query builders and you will find plenty more.
5.2 Match query: match
The client provides the QueryBuilders factory to build the various query implementations:
- Kibana:
GET wang_index_01/_search
{
"query": {
"match": {
"userName": "哈哈"
}
}
}
- Java code:
// Create the search request; the constructor accepts one or more index names
SearchRequest searchRequest = new SearchRequest("wang_index_01");
// Create the search source builder
SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
searchSourceBuilder.query(QueryBuilders.matchQuery("userName","哈哈"));
// Attach the builder to the request
searchRequest.source(searchSourceBuilder);
// Execute and get the result set
SearchResponse search = client.search(searchRequest, RequestOptions.DEFAULT);
SearchHit[] hits = search.getHits().getHits();
// Iterate over each hit
for (SearchHit hit : hits) {
String sourceAsString = hit.getSourceAsString();
System.out.println("sourceAsString = " + sourceAsString);
}
What changes between search types is only the query object built with QueryBuilders; the rest of the code stays basically the same:
5.3 Range query: range
- Kibana:
GET wang_index_01/_search
{
"query": {
"range": {
"userPhone": {
"gte": 15727538288,
"lte": 15727538289
}
}
}
}
- Java code:
// Create the search request; the constructor accepts one or more index names
SearchRequest searchRequest = new SearchRequest("wang_index_01");
// Create the search source builder
SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
// gte/lte to match the inclusive bounds of the Kibana command
searchSourceBuilder.query(QueryBuilders.rangeQuery("userPhone").gte("15727538288").lte("15727538289"));
// Attach the builder to the request
searchRequest.source(searchSourceBuilder);
// Execute and get the result set
SearchResponse search = client.search(searchRequest, RequestOptions.DEFAULT);
SearchHit[] hits = search.getHits().getHits();
// Iterate over each hit
for (SearchHit hit : hits) {
String sourceAsString = hit.getSourceAsString();
System.out.println("sourceAsString = " + sourceAsString);
}
As before, only the builder changes: QueryBuilders.rangeQuery.
5.4 Filtering _source
As mentioned earlier, every field keeps a copy of its data in _source (which is why store defaults to false), so filtering the returned fields starts from _source.
- Kibana command:
GET wang_index_01/_search
{
"_source": [
"userPhone"
],
"query": {
"match_all": {}
}
}
- Java code:
// Create the search request; the constructor accepts one or more index names
SearchRequest searchRequest = new SearchRequest("wang_index_01");
// Create the search source builder
SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
searchSourceBuilder.query(QueryBuilders.matchAllQuery());
// Only return userPhone from _source (includes, excludes)
searchSourceBuilder.fetchSource(new String[]{"userPhone"}, null);
// Attach the builder to the request
searchRequest.source(searchSourceBuilder);
// Execute and get the result set
SearchResponse search = client.search(searchRequest, RequestOptions.DEFAULT);
SearchHit[] hits = search.getHits().getHits();
for (SearchHit hit : hits) {
String sourceAsString = hit.getSourceAsString();
System.out.println("sourceAsString = " + sourceAsString);
}
5.5 Sorting: sort
- Sorting in Kibana:
GET wang_index_01/_search
{
"query": {
"match_all": {
"boost": 1
}
},
"fields": [
{
"field": "userPhone"
}
],
"sort": [
{
"userName": {
"order": "asc"
}
}
]
}
Note: the sort field must not be an analyzed (tokenized) field, or you will get the following error:
Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead
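If a field has to stay text for full-text search but also needs sorting, a common pattern is to add a keyword sub-field in the mapping (a sketch; wang_index_02 and the sub-field name keyword are only illustrative conventions):

```
PUT wang_index_02
{
  "mappings": {
    "properties": {
      "address": {
        "type": "text",
        "analyzer": "ik_max_word",
        "fields": {
          "keyword": { "type": "keyword" }
        }
      }
    }
  }
}
```

The sort would then target address.keyword instead of address.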
- Java code:
// Create the search request; the constructor accepts one or more index names
SearchRequest searchRequest = new SearchRequest("wang_index_01");
// Create the search source builder
SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
searchSourceBuilder.query(QueryBuilders.matchAllQuery());
// Sort ascending on the keyword field userName
searchSourceBuilder.sort(new FieldSortBuilder("userName").order(SortOrder.ASC));
searchSourceBuilder.fetchField("userPhone");
// Attach the builder to the request
searchRequest.source(searchSourceBuilder);
// Execute and get the result set
SearchResponse search = client.search(searchRequest, RequestOptions.DEFAULT);
SearchHit[] hits = search.getHits().getHits();
for (SearchHit hit : hits) {
String sourceAsString = hit.getSourceAsString();
System.out.println("sourceAsString = " + sourceAsString);
}
5.6 Pagination: from and size
- Kibana command:
GET wang_index_01/_search
{
"query": {
"match_all": {}
},
"from": 0,
"size": 3
}
- Java code:
SearchRequest searchRequest = new SearchRequest("wang_index_01");
SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
searchSourceBuilder.query(QueryBuilders.matchAllQuery());
searchRequest.source(searchSourceBuilder);
// Pagination parameters: convert a 1-based page number into an offset
int page = 1;
int size = 3;
int start = (page - 1) * size;
// Apply pagination
searchSourceBuilder.from(start);
searchSourceBuilder.size(size);
// Execute and get the result set
SearchResponse search = client.search(searchRequest, RequestOptions.DEFAULT);
SearchHit[] hits = search.getHits().getHits();
for (SearchHit hit : hits) {
String sourceAsString = hit.getSourceAsString();
System.out.println("sourceAsString = " + sourceAsString);
}
5.7 Aggregations (aggs): metrics
The aggregations object holds all aggregation queries; multiple aggregations go in as multiple named entries, a single one as a single object. aggregations can be abbreviated to aggs.
- Aggregation in Kibana:
GET wang_index_01/_search
{
"query": {
"match_all": {}
},
"aggs": {
"countUseName": {
"value_count": {
"field": "userName"
}
}
}
}
- Java code:
SearchRequest searchRequest = new SearchRequest("wang_index_01");
SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
searchSourceBuilder.query(QueryBuilders.matchAllQuery());
// value_count aggregation on userName
searchSourceBuilder.aggregation(AggregationBuilders.count("countUseName").field("userName"));
searchRequest.source(searchSourceBuilder);
// Execute and get the result set
SearchResponse search = client.search(searchRequest, RequestOptions.DEFAULT);
// Read the aggregation result back by its name
ValueCount countUseName = search.getAggregations().get("countUseName");
System.out.println("countUseName = " + countUseName.getValue());
SearchHit[] hits = search.getHits().getHits();
for (SearchHit hit : hits) {
String sourceAsString = hit.getSourceAsString();
System.out.println("sourceAsString = " + sourceAsString);
}
Metrics are essentially MySQL's max, min, avg, sum and count, computed over a single field.
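To make the analogy concrete, here is the same set of numbers computed in plain Java over a small list of values; this is only an illustration of what the metric aggregations (value_count, max, min, sum, avg) return, not es client code:

```java
import java.util.IntSummaryStatistics;
import java.util.stream.IntStream;

public class MetricsAnalogy {
    public static void main(String[] args) {
        // The five numbers es metric aggregations would compute over a numeric field
        IntSummaryStatistics stats = IntStream.of(10, 20, 30, 40).summaryStatistics();
        System.out.println("value_count = " + stats.getCount());
        System.out.println("max = " + stats.getMax());
        System.out.println("min = " + stats.getMin());
        System.out.println("sum = " + stats.getSum());
        System.out.println("avg = " + stats.getAverage());
    }
}
```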
Bucket aggregations, unlike metrics, are not computed from a single value; each bucket collects the documents that fall into it according to the criterion we define. A single response returns at most 10000 buckets by default, which can be adjusted via search.max_buckets.
A Bucket aggregation is the counterpart of GROUP BY in a database.
- Kibana example:
GET wang_index/_search
{
"query": {
"match_all": {
"boost": 1
}
},
"aggregations": {
"genderCount": {
"terms": {
"field": "gender",
"size": 10,
"min_doc_count": 1,
"shard_min_doc_count": 0,
"show_term_doc_count_error": false,
"order": [
{
"_count": "desc"
},
{
"_key": "asc"
}
]
}
},
"balanceAvg": {
"avg": {
"field": "balance"
}
}
}
}
Note: only non-analyzed fields can take part in aggregations.
- Corresponding Java code:
// 1. Create the search request for the target index
SearchRequest request = new SearchRequest("wang_index");
// 2. Create the search source builder
SearchSourceBuilder builder = new SearchSourceBuilder();
// 3. Build the query
MatchAllQueryBuilder matchAllQueryBuilder = QueryBuilders.matchAllQuery();
builder.query(matchAllQueryBuilder);
// Terms aggregation: document count per gender
TermsAggregationBuilder genderAgg = AggregationBuilders.terms("genderCount").field("gender");
builder.aggregation(genderAgg);
// Metrics aggregation: average balance
AvgAggregationBuilder balanceAvg = AggregationBuilders.avg("balanceAvg").field("balance");
builder.aggregation(balanceAvg);
// 4. Attach the builder to the request
request.source(builder);
// 5. Execute the request
SearchResponse searchResponse = client.search(request, RequestOptions.DEFAULT);
// 6. Process the returned data
SearchHit[] hits = searchResponse.getHits().getHits();
List<String> list = new ArrayList<>();
for (SearchHit hit : hits) {
String hitString = hit.getSourceAsString();
System.out.println(hitString);
list.add(hitString);
}
Map<String, Aggregation> asMap = searchResponse.getAggregations().getAsMap();
System.out.println("asMap = " + asMap);
5.8 Highlighting
- Kibana example:
GET wang_index_01/_search
{
"query": {
"match_all": {
"boost": 1
}
},
"highlight": {
"pre_tags": [
"<em>"
],
"post_tags": [
"</em>"
],
"fields": {
"userName": {}
}
}
}
- Corresponding Java code:
SearchRequest searchRequest = new SearchRequest("wang_index_01");
SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
searchSourceBuilder.query(QueryBuilders.matchAllQuery());
HighlightBuilder highlightBuilder = new HighlightBuilder();
highlightBuilder.field("userName")
.preTags("<em>")
.postTags("</em>");
searchSourceBuilder.highlighter(highlightBuilder);
searchRequest.source(searchSourceBuilder);
// Execute and get the result set
SearchResponse search = client.search(searchRequest, RequestOptions.DEFAULT);
SearchHit[] hits = search.getHits().getHits();
for (SearchHit hit : hits) {
String sourceAsString = hit.getSourceAsString();
System.out.println("sourceAsString = " + sourceAsString);
// The highlighted fragments come back separately, not inside _source
System.out.println("highlight = " + hit.getHighlightFields());
}
To actually look good, highlighting needs the front end to render the tags.