elasticsearch: All primary shards become inactive when restarting a single-node cluster
I started a new Elasticsearch cluster, added a new index, and shut it down. When I try to start it again, there are no active shards anymore; the shards have become inactive and unassigned. I use the index settings below:
"settings": { "number_of_shards": 32, "number_of_replicas": 3 }
Here is the health output:
{ "cluster_name" : "sailcraft", "status" : "red", "timed_out" : false, "number_of_nodes" : 1, "number_of_data_nodes" : 1, "active_primary_shards" : 0, "active_shards" : 0, "relocating_shards" : 0, "initializing_shards" : 0, "unassigned_shards" : 128, "number_of_pending_tasks" : 0 }
Here the same problem has been marked as solved, but I don't think that's the solution.
Edit:
It's the same question, but I don't think that post is the right answer. It says you need 2 more nodes if you have 2 more replica shards. I read the related docs and there is no such restriction.
Edit 2: Solution: the setting index.recovery.initial_shards provides the following behavior. Basically, when using the local gateway, a shard is recovered only once a quorum of its copies is found among the nodes in the cluster. Let's take an index with N shards and 1 replica (2 copies of each shard). The default (quorum) means that once a single copy of a shard is found, it is recovered. If you have 2 replicas (3 copies), a shard is recovered once 2 copies are found. You can change this setting in case you have lost a large number of nodes and quorum is too strict a setting. It can be set on a "live" index (which is in a "red" state, obviously, because not all shards have been recovered).
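A minimal sketch of applying this to a live (red) index via the update-settings API; the index name "myindex" is a placeholder, and it assumes an older Elasticsearch version where the local gateway and index.recovery.initial_shards still exist:

# lower the recovery quorum for this index so a single found copy is enough to recover a shard
curl -XPUT 'localhost:9200/myindex/_settings' -d '{
  "index.recovery.initial_shards": 1
}'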
Here's the documentation mentioning this. Indeed, it's not evident, but it's there:
index.recovery.initial_shards
When using the local gateway, a particular shard is recovered only if a quorum of its copies can be allocated in the cluster. It can be set to:
quorum (default)
quorum-1 (or half)
full
full-1
Number values are also supported, e.g. 1.
If you want to run just a single node, set index.recovery.initial_shards: 1 in the elasticsearch.yml file.