
In this post (my second in a series on Elastic Search and Liferay) we are going to see how to install and configure Elastic Search on our computer and perform simple queries. I am going to use version 0.19.4 on Windows, but the process is very similar for other versions and platforms.

Follow these steps:

  1. Select and download a version of Elastic Search.
  2. Unzip it into two different locations (yes, we are going to create a simple cluster with two nodes):
    • C:\Projects\ElasticSearch\elasticsearch-0.19.4-node-1
    • C:\Projects\ElasticSearch\elasticsearch-0.19.4-node-2
  3. Modify <node_name>\config\elasticsearch.yml in both nodes:
    Original elasticsearch.yml (all of these settings are commented out by default):

    […]
    ################ Cluster #############

    # Cluster name identifies your cluster for auto-discovery. If you’re running
    # multiple clusters on the same network, make sure you’re using unique names.
    #
    # cluster.name: elasticsearch
    ################  Node #############
    # Node names are generated dynamically on startup, so you’re relieved
    # from configuring them manually. You can tie this node to a specific name:
    #
    # node.name: "Franz Kafka"
    # Every node can be configured to allow or deny being eligible as the master,
    # and to allow or deny to store the data.
    #
    # Allow this node to be eligible as a master node (enabled by default):
    #
    # node.master: true
    #
    # Allow this node to store data (enabled by default):
    #
    # node.data: true
    […]
    # Set a custom port for the node to node communication (9300 by default):
    #
    # transport.tcp.port: 9300
    # Enable compression for all communication between nodes (disabled by default):
    #
    # transport.tcp.compress: true
    # Set a custom port to listen for HTTP traffic:
    #
    # http.port: 9200
    […]
    elasticsearch-0.19.4-node-1\config\elasticsearch.yml:

    […]
    ################ Cluster #############

    # Cluster name identifies your cluster for auto-discovery. If you’re running
    # multiple clusters on the same network, make sure you’re using unique names.
    #
    cluster.name: my_cluster_name
    ################  Node #############
    # Node names are generated dynamically on startup, so you’re relieved
    # from configuring them manually. You can tie this node to a specific name:
    #
    node.name: node1
    # Every node can be configured to allow or deny being eligible as the master,
    # and to allow or deny to store the data.
    #
    # Allow this node to be eligible as a master node (enabled by default):
    #
     node.master: true
    #
    # Allow this node to store data (enabled by default):
    #
     node.data: true
    […]
    # Set a custom port for the node to node communication (9300 by default):
    #
     transport.tcp.port: 9300
    # Enable compression for all communication between nodes (disabled by default):
    #
    # transport.tcp.compress: true
    # Set a custom port to listen for HTTP traffic:
    #
     http.port: 9200
    […]
    elasticsearch-0.19.4-node-2\config\elasticsearch.yml:

    […]
    ################ Cluster #############

    # Cluster name identifies your cluster for auto-discovery. If you’re running
    # multiple clusters on the same network, make sure you’re using unique names.
    #
    cluster.name: my_cluster_name
    ################  Node #############
    # Node names are generated dynamically on startup, so you’re relieved
    # from configuring them manually. You can tie this node to a specific name:
    #
    node.name: node2
    # Every node can be configured to allow or deny being eligible as the master,
    # and to allow or deny to store the data.
    #
    # Allow this node to be eligible as a master node (enabled by default):
    #
     node.master: false
    #
    # Allow this node to store data (enabled by default):
    #
     node.data: true
    […]
    # Set a custom port for the node to node communication (9300 by default):
    #
     transport.tcp.port: 9301
    # Enable compression for all communication between nodes (disabled by default):
    #
    # transport.tcp.compress: true
    # Set a custom port to listen for HTTP traffic:
    #
     http.port: 9201
    […]
  4. It may be necessary to copy several jars into the lib folder. If they are not already present, copy these jars into both nodes:
    1. elasticsearch-0.19.4.jar
    2. jna-3.3.0.jar
    3. jts-1.12.jar
    4. log4j-1.2.17.jar
    5. lucene-analyzers-3.6.2.jar
    6. lucene-core-3.6.2.jar
    7. lucene-highlighter-3.6.2.jar
    8. lucene-memory-3.6.2.jar
    9. lucene-queries-3.6.2.jar
    10. snappy-java-1.0.4.1.jar
    11. spatial4j-0.3.jar

    And if you want to use the ICU analyzers, these may also be needed in the plugins\analysis-icu folder:

    1. elasticsearch-analysis-icu-1.7.0.jar
    2. icu4j-4.8.1.1.jar
    3. lucene-icu-3.6.1.jar
  5. Create a file named start_elasticsearch_cluster.bat like this:
    start call "c:/Projects/ElasticSearch/elasticsearch-0.19.4-node-1/bin/elasticsearch.bat"
    PING -n 1 -w 15000 1.1.1.1>NUL
    start call "c:/Projects/ElasticSearch/elasticsearch-0.19.4-node-2/bin/elasticsearch.bat"
    What does this file do? It starts the first node, waits for 15 seconds (the PING to an unreachable address with a 15000 ms timeout, output redirected to NUL, is a classic batch trick for pausing before doing something else), and then starts the second node.
  6. Go to http://localhost:9200. You will see a page like this:
    {
      "ok" : true,
      "status" : 200,
      "name" : "node1",
      "version" : {
        "number" : "0.19.4",
        "snapshot_build" : false
      },
      "tagline" : "You Know, for Search"
    }

    And right now, you are prepared to create an index, add a document to the index, and perform amazingly fast searches against your new Elastic Search cluster!
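To make those three operations concrete, here is a minimal sketch of the request payloads involved. The index name (blog), type (post), and field names are my own illustrative choices, not from this post, and the shapes follow the 0.19-era REST API (where the analyzed full-text query was still called "text"); the equivalent curl commands are shown in the comments:

```python
import json

# 1. Create an index (one primary shard per node, one replica):
#    curl -XPUT 'http://localhost:9200/blog/' -d '<create_index payload>'
create_index = {
    "settings": {
        "number_of_shards": 2,
        "number_of_replicas": 1,
    }
}

# 2. Index a document under a type and id:
#    curl -XPUT 'http://localhost:9200/blog/post/1' -d '<document payload>'
document = {
    "title": "Installing Elastic Search",
    "tags": ["elasticsearch", "liferay"],
}

# 3. Search the index (0.19.x "text" query; later versions renamed it "match"):
#    curl -XGET 'http://localhost:9200/blog/post/_search' -d '<search payload>'
search = {
    "query": {
        "text": {"title": "elastic"}
    }
}

# Print each payload as the JSON body you would send:
for name, payload in [("create_index", create_index),
                      ("document", document),
                      ("search", search)]:
    print(name, "->", json.dumps(payload))
```

Since node2 listens on http.port 9201, the same requests sent to http://localhost:9201 should return the same results, which is an easy way to confirm both nodes joined the cluster.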

If you have any questions, need help, or want to share your perspective, just comment at the bottom of this post or contact XTIVIA, my employer!
