This post focuses on setting up the development environment. The project source code is on GitHub and is easy to find with the keyword xunwu. Elasticsearch version: 5.6.1.

  1. Install MySQL and import the xunwu.sql data
  2. Install Elasticsearch (version 5.6.1), set the cluster name to xunwu-es, install the elasticsearch-analysis-ik plugin, and start ES (if you want to use head, enable CORS in the configuration file)
  3. Install Kafka (http://kafka.apache.org), start it, and create the xunwu_topic topic
  4. Install ZooKeeper (http://zookeeper.apache.org) and start it
  5. Send a JSON request to http://localhost:9200/xunwu whose body is xunwu-web/src/main/resources/db/house_index_with_suggest_mapping.json (see the curl sketch after this list)
  6. Install Redis and start it
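
For step 5, a minimal curl sketch (assuming Elasticsearch listens on localhost:9200 and the command runs from the xunwu-web module directory):

curl -XPUT 'http://localhost:9200/xunwu' -H 'Content-Type: application/json' -d @src/main/resources/db/house_index_with_suggest_mapping.json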

Start Elasticsearch

Start it in the background:

/opt/elasticsearch-5.6.1/bin/elasticsearch -d
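
To confirm the node came up, query the cluster health endpoint (default port 9200); the response should show cluster_name xunwu-es and a green or yellow status:

curl 'http://localhost:9200/_cluster/health?pretty'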

Start Kibana

nohup /opt/kibana-5.6.1-linux-x86_64/bin/kibana &
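
Kibana serves on port 5601 by default; once it is up, its status endpoint responds:

curl 'http://localhost:5601/api/status'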

Access Kibana in the browser at http://localhost:5601.

Send the JSON request to ES through the Kibana Dev Tools console:

PUT xunwu
{
  "settings": {
    "number_of_replicas": 0,
    "number_of_shards": 5,
    "index.store.type": "niofs",
    "index.query.default_field": "title",
    "index.unassigned.node_left.delayed_timeout": "5m"
  },
  "mappings": {
    "house": {
      "dynamic": "strict",
      "_all": {
        "enabled": false
      },
      "properties": {
        "houseId": {
          "type": "long"
        },
        "title": {
          "type": "text",
          "index": "analyzed",
          "analyzer": "ik_smart",
          "search_analyzer": "ik_smart"
        },
        "price": {
          "type": "integer"
        },
        "area": {
          "type": "integer"
        },
        "createTime": {
          "type": "date",
          "format": "strict_date_optional_time||epoch_millis"
        },
        "lastUpdateTime": {
          "type": "date",
          "format": "strict_date_optional_time||epoch_millis"
        },
        "cityEnName": {
          "type": "keyword"
        },
        "regionEnName": {
          "type": "keyword"
        },
        "direction": {
          "type": "integer"
        },
        "distanceToSubway": {
          "type": "integer"
        },
        "subwayLineName": {
          "type": "keyword"
        },
        "subwayStationName": {
          "type": "keyword"
        },
        "tags": {
          "type": "text"
        },
        "street": {
          "type": "keyword"
        },
        "district": {
          "type": "keyword"
        },
        "description": {
          "type": "text",
          "index": "analyzed",
          "analyzer": "ik_smart",
          "search_analyzer": "ik_smart"
        },
        "layoutDesc": {
          "type": "text",
          "index": "analyzed",
          "analyzer": "ik_smart",
          "search_analyzer": "ik_smart"
        },
        "traffic": {
          "type": "text",
          "index": "analyzed",
          "analyzer": "ik_smart",
          "search_analyzer": "ik_smart"
        },
        "roundService": {
          "type": "text",
          "index": "analyzed",
          "analyzer": "ik_smart",
          "search_analyzer": "ik_smart"
        },
        "rentWay": {
          "type": "integer"
        },
        "suggest": {
          "type": "completion"
        },
        "location": {
          "type": "geo_point"
        }
      }
    }
  }
}

If the script above is run against Elasticsearch 6.x, it fails:

{
  "error": {
    "root_cause": [
      {
        "type": "mapper_parsing_exception",
        "reason": "Failed to parse mapping [house]: Could not convert [roundService.index] to boolean"
      }
    ],
    "type": "mapper_parsing_exception",
    "reason": "Failed to parse mapping [house]: Could not convert [roundService.index] to boolean",
    "caused_by": {
      "type": "illegal_argument_exception",
      "reason": "Could not convert [roundService.index] to boolean",
      "caused_by": {
        "type": "illegal_argument_exception",
        "reason": "Failed to parse value [analyzed] as only [true] or [false] are allowed."
      }
    }
  },
  "status": 400
}

Changing every "index": "analyzed" in the request above to "index": true fixes it.
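
For example, the title field becomes:

"title": {
  "type": "text",
  "index": true,
  "analyzer": "ik_smart",
  "search_analyzer": "ik_smart"
}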

After submitting, the response is:

#! Deprecation: [_all] is deprecated in 6.0+ and will be removed in 7.0. As a replacement, you can use [copy_to] on mapping fields to create your own catch all field.
{
  "acknowledged": true,
  "shards_acknowledged": true,
  "index": "xunwu"
}

Why

  • In Elasticsearch 6.x, the index mapping parameter only accepts a boolean value, true or false; this is a change introduced in 6.x.

  • In earlier versions, index had three possible values:

    analyzed (default)
    not_analyzed
    no
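
Roughly, the old three-valued index maps onto the 6.x model as follows (a sketch, not a complete migration guide):

"index": "analyzed"      ->  "type": "text",    "index": true
"index": "not_analyzed"  ->  "type": "keyword", "index": true
"index": "no"            ->  "index": false (on either text or keyword)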

Start the servers

Start ZooKeeper

/home/utomcat/soft/zookeeper-3.4.12/bin/zkServer.sh start &

# cp /home/utomcat/soft/zookeeper-3.4.12/conf/zoo_sample.cfg /home/utomcat/soft/zookeeper-3.4.12/conf/zoo.cfg
# /home/utomcat/soft/zookeeper-3.4.12/bin/zkServer.sh start
[utomcat@AndyCentOS7Basic conf]$ /home/utomcat/soft/zookeeper-3.4.12/bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /home/utomcat/soft/zookeeper-3.4.12/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

Alternatively, ZooKeeper can be started with the script bundled with Kafka:

/home/utomcat/soft/kafka_2.11-2.0.0/bin/zookeeper-server-start.sh /home/utomcat/soft/kafka_2.11-2.0.0/config/zookeeper.properties &

Start Kafka

/home/utomcat/soft/kafka_2.11-2.0.0/bin/kafka-server-start.sh /home/utomcat/soft/kafka_2.11-2.0.0/config/server.properties &

# /home/utomcat/soft/kafka_2.11-2.0.0/bin/kafka-server-start.sh /home/utomcat/soft/kafka_2.11-2.0.0/config/server.properties &
[utomcat@AndyCentOS7Basic bin]$ /home/utomcat/soft/kafka_2.11-2.0.0/bin/kafka-server-start.sh /home/utomcat/soft/kafka_2.11-2.0.0/config/server.properties &
[1] 2242
[utomcat@AndyCentOS7Basic bin]$ [2018-09-20 15:59:26,257] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2018-09-20 15:59:26,492] INFO starting (kafka.server.KafkaServer)
[2018-09-20 15:59:26,493] INFO Connecting to zookeeper on localhost:2181 (kafka.server.KafkaServer)
[2018-09-20 15:59:26,507] INFO [ZooKeeperClient] Initializing a new session to localhost:2181. (kafka.zookeeper.ZooKeeperClient)
[2018-09-20 15:59:26,548] INFO Client environment:zookeeper.version=3.4.13-2d71af4dbe22557fda74f9a9b4309b15a7487f03, built on 06/29/2018 00:39 GMT (org.apache.zookeeper.ZooKeeper)
[2018-09-20 15:59:26,548] INFO Client environment:host.name=223.87.179.191 (org.apache.zookeeper.ZooKeeper)
[2018-09-20 15:59:26,548] INFO Client environment:java.version=1.8.0_161 (org.apache.zookeeper.ZooKeeper)
[2018-09-20 15:59:26,548] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper)
[2018-09-20 15:59:26,548] INFO Client environment:java.home=/usr/jdk1.8.0_161/jre (org.apache.zookeeper.ZooKeeper)
[2018-09-20 15:59:26,548] INFO Client environment:java.class.path=/home/utomcat/soft/kafka_2.11-2.0.0/bin/../libs/activation-1.1.1.jar:/home/utomcat/soft/kafka_2.11-2.0.0/bin/../libs/aopalliance-repackaged-2.5.0-b42.jar:/home/utomcat/soft/kafka_2.11-2.0.0/bin/../libs/argparse4j-0.7.0.jar:/home/utomcat/soft/kafka_2.11-2.0.0/bin/../libs/audience-annotations-0.5.0.jar:/home/utomcat/soft/kafka_2.11-2.0.0/bin/../libs/commons-lang3-3.5.jar:/home/utomcat/soft/kafka_2.11-2.0.0/bin/../libs/connect-api-2.0.0.jar:/home/utomcat/soft/kafka_2.11-2.0.0/bin/../libs/connect-basic-auth-extension-2.0.0.jar:/home/utomcat/soft/kafka_2.11-2.0.0/bin/../libs/connect-file-2.0.0.jar:/home/utomcat/soft/kafka_2.11-2.0.0/bin/../libs/connect-json-2.0.0.jar:/home/utomcat/soft/kafka_2.11-2.0.0/bin/../libs/connect-runtime-2.0.0.jar:/home/utomcat/soft/kafka_2.11-2.0.0/bin/../libs/connect-transforms-2.0.0.jar:/home/utomcat/soft/kafka_2.11-2.0.0/bin/../libs/guava-20.0.jar:/home/utomcat/soft/kafka_2.11-2.0.0/bin/../libs/hk2-api-2.5.0-b42.jar:/home/utomcat/soft/kafka_2.11-2.0.0/bin/../libs/hk2-locator-2.5.0-b42.jar:/home/utomcat/soft/kafka_2.11-2.0.0/bin/../libs/hk2-utils-2.5.0-b42.jar:/home/utomcat/soft/kafka_2.11-2.)
[2018-09-20 15:59:26,548] INFO Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
[2018-09-20 15:59:26,548] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
[2018-09-20 15:59:26,548] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
[2018-09-20 15:59:26,548] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
[2018-09-20 15:59:26,548] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
[2018-09-20 15:59:26,548] INFO Client environment:os.version=3.10.0-862.2.3.el7.x86_64 (org.apache.zookeeper.ZooKeeper)
[2018-09-20 15:59:26,548] INFO Client environment:user.name=utomcat (org.apache.zookeeper.ZooKeeper)
[2018-09-20 15:59:26,548] INFO Client environment:user.home=/home/utomcat (org.apache.zookeeper.ZooKeeper)
[2018-09-20 15:59:26,549] INFO Client environment:user.dir=/home/utomcat/soft/kafka_2.11-2.0.0/bin (org.apache.zookeeper.ZooKeeper)
[2018-09-20 15:59:26,549] INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=6000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@3febb011 (org.apache.zookeeper.ZooKeeper)
[2018-09-20 15:59:26,565] INFO Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2018-09-20 15:59:26,567] INFO [ZooKeeperClient] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2018-09-20 15:59:26,568] INFO Socket connection established to localhost/127.0.0.1:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2018-09-20 15:59:26,587] INFO Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x10004c56b230000, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2018-09-20 15:59:26,589] INFO [ZooKeeperClient] Connected. (kafka.zookeeper.ZooKeeperClient)
[2018-09-20 15:59:26,830] INFO Cluster ID = 9HOtQ4rMRmqdD5tTso0oqw (kafka.server.KafkaServer)
[2018-09-20 15:59:26,834] WARN No meta.properties file under dir /tmp/kafka-logs/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2018-09-20 15:59:26,879] INFO KafkaConfig values:
advertised.host.name = null
advertised.listeners = null
advertised.port = null
alter.config.policy.class.name = null
alter.log.dirs.replication.quota.window.num = 11
alter.log.dirs.replication.quota.window.size.seconds = 1
authorizer.class.name =
auto.create.topics.enable = true
auto.leader.rebalance.enable = true
background.threads = 10
broker.id = 0
broker.id.generation.enable = true
broker.rack = null
client.quota.callback.class = null
compression.type = producer
connections.max.idle.ms = 600000
controlled.shutdown.enable = true
controlled.shutdown.max.retries = 3
controlled.shutdown.retry.backoff.ms = 5000
controller.socket.timeout.ms = 30000
create.topic.policy.class.name = null
default.replication.factor = 1
delegation.token.expiry.check.interval.ms = 3600000
delegation.token.expiry.time.ms = 86400000
delegation.token.master.key = null
delegation.token.max.lifetime.ms = 604800000
delete.records.purgatory.purge.interval.requests = 1
delete.topic.enable = true
fetch.purgatory.purge.interval.requests = 1000
group.initial.rebalance.delay.ms = 0
group.max.session.timeout.ms = 300000
group.min.session.timeout.ms = 6000
host.name =
inter.broker.listener.name = null
inter.broker.protocol.version = 2.0-IV1
leader.imbalance.check.interval.seconds = 300
leader.imbalance.per.broker.percentage = 10
listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
listeners = null
log.cleaner.backoff.ms = 15000
log.cleaner.dedupe.buffer.size = 134217728
log.cleaner.delete.retention.ms = 86400000
log.cleaner.enable = true
log.cleaner.io.buffer.load.factor = 0.9
log.cleaner.io.buffer.size = 524288
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
log.cleaner.min.cleanable.ratio = 0.5
log.cleaner.min.compaction.lag.ms = 0
log.cleaner.threads = 1
log.cleanup.policy = [delete]
log.dir = /tmp/kafka-logs
log.dirs = /tmp/kafka-logs
log.flush.interval.messages = 9223372036854775807
log.flush.interval.ms = null
log.flush.offset.checkpoint.interval.ms = 60000
log.flush.scheduler.interval.ms = 9223372036854775807
log.flush.start.offset.checkpoint.interval.ms = 60000
log.index.interval.bytes = 4096
log.index.size.max.bytes = 10485760
log.message.downconversion.enable = true
log.message.format.version = 2.0-IV1
log.message.timestamp.difference.max.ms = 9223372036854775807
log.message.timestamp.type = CreateTime
log.preallocate = false
log.retention.bytes = -1
log.retention.check.interval.ms = 300000
log.retention.hours = 168
log.retention.minutes = null
log.retention.ms = null
log.roll.hours = 168
log.roll.jitter.hours = 0
log.roll.jitter.ms = null
log.roll.ms = null
log.segment.bytes = 1073741824
log.segment.delete.delay.ms = 60000
max.connections.per.ip = 2147483647
max.connections.per.ip.overrides =
max.incremental.fetch.session.cache.slots = 1000
message.max.bytes = 1000012
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
min.insync.replicas = 1
num.io.threads = 8
num.network.threads = 3
num.partitions = 1
num.recovery.threads.per.data.dir = 1
num.replica.alter.log.dirs.threads = null
num.replica.fetchers = 1
offset.metadata.max.bytes = 4096
offsets.commit.required.acks = -1
offsets.commit.timeout.ms = 5000
offsets.load.buffer.size = 5242880
offsets.retention.check.interval.ms = 600000
offsets.retention.minutes = 10080
offsets.topic.compression.codec = 0
offsets.topic.num.partitions = 50
offsets.topic.replication.factor = 1
offsets.topic.segment.bytes = 104857600
password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
password.encoder.iterations = 4096
password.encoder.key.length = 128
password.encoder.keyfactory.algorithm = null
password.encoder.old.secret = null
password.encoder.secret = null
port = 9092
principal.builder.class = null
producer.purgatory.purge.interval.requests = 1000
queued.max.request.bytes = -1
queued.max.requests = 500
quota.consumer.default = 9223372036854775807
quota.producer.default = 9223372036854775807
quota.window.num = 11
quota.window.size.seconds = 1
replica.fetch.backoff.ms = 1000
replica.fetch.max.bytes = 1048576
replica.fetch.min.bytes = 1
replica.fetch.response.max.bytes = 10485760
replica.fetch.wait.max.ms = 500
replica.high.watermark.checkpoint.interval.ms = 5000
replica.lag.time.max.ms = 10000
replica.socket.receive.buffer.bytes = 65536
replica.socket.timeout.ms = 30000
replication.quota.window.num = 11
replication.quota.window.size.seconds = 1
request.timeout.ms = 30000
reserved.broker.max.id = 1000
sasl.client.callback.handler.class = null
sasl.enabled.mechanisms = [GSSAPI]
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.principal.to.local.rules = [DEFAULT]
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism.inter.broker.protocol = GSSAPI
sasl.server.callback.handler.class = null
security.inter.broker.protocol = PLAINTEXT
socket.receive.buffer.bytes = 102400
socket.request.max.bytes = 104857600
socket.send.buffer.bytes = 102400
ssl.cipher.suites = []
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
transaction.max.timeout.ms = 900000
transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
transaction.state.log.load.buffer.size = 5242880
transaction.state.log.min.isr = 1
transaction.state.log.num.partitions = 50
transaction.state.log.replication.factor = 1
transaction.state.log.segment.bytes = 104857600
transactional.id.expiration.ms = 604800000
unclean.leader.election.enable = false
zookeeper.connect = localhost:2181
zookeeper.connection.timeout.ms = 6000
zookeeper.max.in.flight.requests = 10
zookeeper.session.timeout.ms = 6000
zookeeper.set.acl = false
zookeeper.sync.time.ms = 2000
(kafka.server.KafkaConfig)
[2018-09-20 15:59:26,914] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-09-20 15:59:26,915] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-09-20 15:59:26,916] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-09-20 15:59:26,953] INFO Log directory /tmp/kafka-logs not found, creating it. (kafka.log.LogManager)
[2018-09-20 15:59:26,965] INFO Loading logs. (kafka.log.LogManager)
[2018-09-20 15:59:26,972] INFO Logs loading complete in 7 ms. (kafka.log.LogManager)
[2018-09-20 15:59:26,983] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
[2018-09-20 15:59:26,984] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
[2018-09-20 15:59:27,365] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
[2018-09-20 15:59:27,387] INFO [SocketServer brokerId=0] Started 1 acceptor threads (kafka.network.SocketServer)
[2018-09-20 15:59:27,415] INFO [ExpirationReaper-0-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2018-09-20 15:59:27,419] INFO [ExpirationReaper-0-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2018-09-20 15:59:27,423] INFO [ExpirationReaper-0-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2018-09-20 15:59:27,426] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
[2018-09-20 15:59:27,455] INFO Creating /brokers/ids/0 (is it secure? false) (kafka.zk.KafkaZkClient)
[2018-09-20 15:59:27,460] INFO Result of znode creation at /brokers/ids/0 is: OK (kafka.zk.KafkaZkClient)
[2018-09-20 15:59:27,461] INFO Registered broker 0 at path /brokers/ids/0 with addresses: ArrayBuffer(EndPoint(223.87.179.191,9092,ListenerName(PLAINTEXT),PLAINTEXT)) (kafka.zk.KafkaZkClient)
[2018-09-20 15:59:27,462] WARN No meta.properties file under dir /tmp/kafka-logs/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2018-09-20 15:59:27,500] INFO Creating /controller (is it secure? false) (kafka.zk.KafkaZkClient)
[2018-09-20 15:59:27,505] INFO Result of znode creation at /controller is: OK (kafka.zk.KafkaZkClient)
[2018-09-20 15:59:27,512] INFO [ExpirationReaper-0-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2018-09-20 15:59:27,513] INFO [ExpirationReaper-0-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2018-09-20 15:59:27,533] INFO [GroupCoordinator 0]: Starting up. (kafka.coordinator.group.GroupCoordinator)
[2018-09-20 15:59:27,543] INFO [ExpirationReaper-0-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2018-09-20 15:59:27,543] INFO [GroupCoordinator 0]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
[2018-09-20 15:59:27,545] INFO [GroupMetadataManager brokerId=0] Removed 0 expired offsets in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-09-20 15:59:27,554] INFO [ProducerId Manager 0]: Acquired new producerId block (brokerId:0,blockStartProducerId:0,blockEndProducerId:999) by writing to Zk with path version 1 (kafka.coordinator.transaction.ProducerIdManager)
[2018-09-20 15:59:27,568] INFO [TransactionCoordinator id=0] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
[2018-09-20 15:59:27,582] INFO [TransactionCoordinator id=0] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
[2018-09-20 15:59:27,595] INFO [Transaction Marker Channel Manager 0]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2018-09-20 15:59:27,618] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[2018-09-20 15:59:27,641] INFO [SocketServer brokerId=0] Started processors for 1 acceptors (kafka.network.SocketServer)
[2018-09-20 15:59:27,642] INFO Kafka version : 2.0.0 (org.apache.kafka.common.utils.AppInfoParser)
[2018-09-20 15:59:27,642] INFO Kafka commitId : 3402a8361b734732 (org.apache.kafka.common.utils.AppInfoParser)
[2018-09-20 15:59:27,643] INFO [KafkaServer id=0] started (kafka.server.KafkaServer)

Start a second broker (this assumes server02.properties is a copy of server.properties with a distinct broker.id, listener port, and log.dirs):

/home/utomcat/soft/kafka_2.11-2.0.0/bin/kafka-server-start.sh /home/utomcat/soft/kafka_2.11-2.0.0/config/server02.properties &

Stop:

/home/utomcat/soft/kafka_2.11-2.0.0/bin/kafka-server-stop.sh

List Kafka topics

List the topic names recorded in this ZooKeeper instance:

/home/utomcat/soft/kafka_2.11-2.0.0/bin/kafka-topics.sh --list --zookeeper 127.0.0.1:2181

Create topics

/home/utomcat/soft/kafka_2.11-2.0.0/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic xunwu_topic

[utomcat@AndyCentOS7Basic conf]$ /home/utomcat/soft/kafka_2.11-2.0.0/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic xunwu_topic
WARNING: Due to limitations in metric names, topics with a period ('.') or underscore ('_') could collide. To avoid issues it is best to use either, but not both.
Created topic "xunwu_topic".

/home/utomcat/soft/kafka_2.11-2.0.0/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic house_build

[utomcat@AndyCentOS7Basic ~]$ /home/utomcat/soft/kafka_2.11-2.0.0/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic house_build
WARNING: Due to limitations in metric names, topics with a period ('.') or underscore ('_') could collide. To avoid issues it is best to use either, but not both.
Created topic "house_build".

Inspect a topic's partition and replica layout

/home/utomcat/soft/kafka_2.11-2.0.0/bin/kafka-topics.sh --describe --zookeeper 127.0.0.1:2181 --topic xunwu_topic

[utomcat@AndyCentOS7Basic ~]$ /home/utomcat/soft/kafka_2.11-2.0.0/bin/kafka-topics.sh --describe --zookeeper 127.0.0.1:2181  --topic xunwu_topic
Topic:xunwu_topic PartitionCount:1 ReplicationFactor:1 Configs:
Topic: xunwu_topic Partition: 0 Leader: 0 Replicas: 0 Isr: 0

Delete a topic

/home/utomcat/soft/kafka_2.11-2.0.0/bin/kafka-topics.sh --delete --zookeeper localhost:2181 --topic house_build

[utomcat@AndyCentOS7Basic ~]$ /home/utomcat/soft/kafka_2.11-2.0.0/bin/kafka-topics.sh --delete --zookeeper localhost:2181 --topic house_build
Topic house_build is marked for deletion.
Note: This will have no impact if delete.topic.enable is not set to true.

  • Listener error

    # The Spring Boot application reports:
    Error while fetching metadata with correlation id 28 : {xunwu_topic=LEADER_NOT_AVAILABLE}

    # The Kafka server reports:
    [2018-09-26 08:43:37,312] ERROR [KafkaApi-0] Number of alive brokers '0' does not meet the required replication factor '1' for the offsets topic (configured via 'offsets.topic.replication.factor'). This error can be ignored if the cluster is starting up and not all brokers are up yet. (kafka.server.KafkaApis)
  • Fix
    Edit the configuration with vi config/server.properties:

    listeners=PLAINTEXT://localhost:9092

Test producing messages

/home/utomcat/soft/kafka_2.11-2.0.0/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic xunwu_topic

  • The producer reports:

    [2018-09-27 15:26:06,053] WARN [Producer clientId=console-producer] Error while fetching metadata with correlation id 38 : {xunwu_topic=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
  • Fix
    Edit the configuration with vi config/server.properties:

    listeners=PLAINTEXT://localhost:9092
  • Important
    If the config specifies a concrete IP address, replace localhost in the client commands with that IP as well; otherwise the same error occurs.

Test consuming messages

/home/utomcat/soft/kafka_2.11-2.0.0/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic xunwu_topic --from-beginning --consumer.config /home/utomcat/soft/kafka_2.11-2.0.0/config/consumer.properties

[utomcat@AndyCentOS7Basic ~]$ /home/utomcat/soft/kafka_2.11-2.0.0/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic xunwu_topic --from-beginning --consumer.config /home/utomcat/soft/kafka_2.11-2.0.0/config/consumer.properties
[2018-09-27 16:59:28,169] WARN [Consumer clientId=consumer-1, groupId=test-consumer-group] Connection to node -1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2018-09-27 16:59:28,227] WARN [Consumer clientId=consumer-1, groupId=test-consumer-group] Connection to node -1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)

With listeners bound to the concrete IP (see above), use that IP instead of localhost:

/home/utomcat/soft/kafka_2.11-2.0.0/bin/kafka-console-consumer.sh --bootstrap-server 192.168.1.88:9092 --topic xunwu_topic --from-beginning --consumer.config /home/utomcat/soft/kafka_2.11-2.0.0/config/consumer.properties
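
A quick end-to-end smoke test, assuming the broker is reachable at 192.168.1.88:9092 (the message text is illustrative): type a line in the producer terminal and it should appear in the consumer terminal.

# terminal 1: producer
/home/utomcat/soft/kafka_2.11-2.0.0/bin/kafka-console-producer.sh --broker-list 192.168.1.88:9092 --topic xunwu_topic
>hello xunwu

# terminal 2: the consumer started with the command above prints
hello xunwu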

Redis

Reference
redis Download
4.0.11

Download & build from source

wget http://download.redis.io/releases/redis-4.0.11.tar.gz
tar -xzvf redis-4.0.11.tar.gz
cd redis-4.0.11
make MALLOC=libc

After make MALLOC=libc finishes compiling, six executables are generated in the src directory: redis-server, redis-cli, redis-benchmark, redis-check-aof, redis-check-rdb, and redis-sentinel. (MALLOC=libc links against libc's allocator instead of the default jemalloc, which avoids jemalloc-related build failures on some systems.)

  • Running make install as user utomcat fails with a permission error:

    [utomcat@AndyCentOS7Basic redis-4.0.11]$ cd src && make install
    CC Makefile.dep

    Hint: It's a good idea to run 'make test' ;)

    INSTALL install
    install: cannot create regular file ‘/usr/local/bin/redis-server’: Permission denied
    make: *** [install] Error 1
  • Run it as root instead:

    [root@AndyCentOS7Basic redis-4.0.11]# cd src && make install

    Hint: It's a good idea to run 'make test' ;)

    INSTALL install
    INSTALL install
    INSTALL install
    INSTALL install
    INSTALL install

Start Redis

  • Run it as root:
    /home/utomcat/soft/redis-4.0.11/src/redis-server /home/utomcat/soft/redis-4.0.11/redis.conf
    [root@AndyCentOS7Basic src]# /home/utomcat/soft/redis-4.0.11/src/redis-server  /home/utomcat/soft/redis-4.0.11/redis.conf
    7229:C 20 Sep 18:18:54.104 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
    7229:C 20 Sep 18:18:54.104 # Redis version=4.0.11, bits=64, commit=00000000, modified=0, pid=7229, just started
    7229:C 20 Sep 18:18:54.104 # Configuration loaded

Enable remote connections

# change bind 127.0.0.1 to
bind 0.0.0.0

Restart Redis for the change to take effect.
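
Note that Redis 3.2+ protected mode can still refuse remote clients when no password is set. If remote connections are still rejected after the bind change, one of the following in redis.conf may also be needed (the password value is a placeholder):

# only on a trusted network
protected-mode no
# or require authentication instead
requirepass your_password_here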

Stop

/home/utomcat/soft/redis-4.0.11/src/redis-cli shutdown

Check the process

[root@AndyCentOS7Basic src]# ps -aux | grep redis
root 7230 0.1 0.0 141836 2040 ? Ssl 18:18 0:02 /home/utomcat/soft/redis-4.0.11/src/redis-server 127.0.0.1:6379
root 8961 0.0 0.0 112704 976 pts/2 S+ 18:54 0:00 grep --color=auto redis

Error when updating a house listing

2018-09-26 07:48:18.055 ERROR 18272 --- [io-8080-exec-10] o.s.k.support.LoggingProducerListener    : Exception thrown when sending a message with key='null' and payload='{"houseId":21,"operation":"index","retry":0}' to topic xunwu_topic:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
...
2018-09-26 07:48:18.055 DEBUG 18272 --- [io-8080-exec-10] org.hibernate.SQL : update house set admin_id=?, area=?, bathroom=?, build_year=?, city_en_name=?, cover=?, create_time=?, direction=?, distance_to_subway=?, district=?, floor=?, last_update_time=?, parlour=?, price=?, region_en_name=?, room=?, status=?, street=?, title=?, total_floor=?, watch_times=? where id=?
2018-09-26 07:49:05.722 ERROR 18272 --- [nio-8080-exec-2] o.s.k.support.LoggingProducerListener : Exception thrown when sending a message with key='null' and payload='{"houseId":21,"operation":"index","retry":0}' to topic xunwu_topic:

The case where the house_build topic has already been deleted:

2018-09-27 16:06:13.342 ERROR 9804 --- [nio-8080-exec-3] o.s.k.support.LoggingProducerListener    : Exception thrown when sending a message with key='null' and payload='{"houseId":20,"operation":"remove","retry":0}' to topic house_build:

Putting the above together: the same error occurs whether the topic exists or not, so the likely cause is that the Kafka service cannot be reached at all. Naturally, the same error also appears if Kafka is not running or has crashed.

Fix

# vi server.properties
listeners=PLAINTEXT://192.168.1.88:9092

With this, the application can be configured to reach Kafka by IP.
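
On the application side this typically means pointing the client at the same address; for Spring Boot with spring-kafka, the standard property is (the address is the one assumed above):

# application.properties
spring.kafka.bootstrap-servers=192.168.1.88:9092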

However, before the listener IP was configured, while that line was still commented out, the server logged

[2018-09-26 08:43:37,312] ERROR [KafkaApi-0] Number of alive brokers '0' does not meet the required replication factor '1' for the offsets topic (configured via 'offsets.topic.replication.factor'). This error can be ignored if the cluster is starting up and not all brokers are up yet. (kafka.server.KafkaApis)

which can mislead you into thinking the Kafka service is actually reachable.

ModelMapper

Official website

Search practice

Test data:
轻工作娱松轻娱松满b轻松bb足娱娱娱轻娱松乐aaa消工作娱工作娱娱轻工作松娱工作费abc
轻工作娱松轻娱松满足娱娱娱轻娱松乐aaa消工作娱工作娱娱轻工作松娱工作费abc

Documents containing 轻松 or 满足 can be matched.

Search result: houseId=21
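
A minimal sketch of such a query in the Kibana console, assuming the xunwu index created above (ik_smart should split the query string into the terms 轻松 and 满足):

GET xunwu/house/_search
{
  "query": {
    "match": {
      "title": "轻松 满足"
    }
  }
}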

Tokenization

simple analyzer

Splits on any non-letter character and lowercases the resulting tokens; digits are dropped.

whitespace analyzer

Splits on whitespace only and does not drop digits.
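
The difference is easy to see with the _analyze API (a sketch for the Kibana console; the sample text is illustrative):

POST _analyze
{
  "analyzer": "simple",
  "text": "Quick 3 brown-foxes"
}

returns the tokens quick, brown, foxes (lowercased, split on non-letters, the digit dropped), while

POST _analyze
{
  "analyzer": "whitespace",
  "text": "Quick 3 brown-foxes"
}

returns Quick, 3, brown-foxes.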