*HBase as the Storage Backend for the Graph Repository*
By default, Atlas uses Titan as the graph repository; it is currently the only graph repository implementation available.
The HBase versions currently supported are 1.1.x. For configuring Atlas graph persistence on HBase, please see "Graph persistence engine - HBase" in the [[Configuration][Configuration]] section for more details.
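For reference, a minimal sketch of the relevant entries in ATLAS_HOME/conf/application.properties, assuming the standard Titan-on-HBase property names (confirm the exact keys in the Configuration page):
<verbatim>
# Use HBase as the Titan storage backend
atlas.graph.storage.backend=hbase
# For standalone mode specify localhost; for distributed mode specify the ZooKeeper quorum used by HBase
atlas.graph.storage.hostname=<comma separated list of ZooKeeper servers>
</verbatim>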
Prerequisites for running HBase as a distributed cluster:
* 3 or 5 ZooKeeper nodes
* At least 3 RegionServer nodes. It would be ideal to run the DataNodes on the same hosts as the RegionServers for data locality.
*Configuring SOLR as the Indexing Backend for the Graph Repository*
By default, Atlas uses Titan as the graph repository; it is currently the only graph repository implementation available.
For configuring Titan to work with Solr, please follow the instructions below.
* Install Solr if not already running. The version of Solr supported is 5.2.1. It can be installed from http://archive.apache.org/dist/lucene/solr/5.2.1/solr-5.2.1.tgz
* Start Solr in cloud mode.
SolrCloud mode uses a ZooKeeper service as a highly available, central location for cluster management.
For a small cluster, running with an existing ZooKeeper quorum should be fine. For larger clusters, you would want to run a separate ZooKeeper quorum with at least 3 servers.
Note: Atlas currently supports Solr in "cloud" mode only. "http" mode is not supported. For more information, refer to the Solr documentation - https://cwiki.apache.org/confluence/display/solr/SolrCloud
* For example, to bring up a Solr node listening on port 8983 on a machine, you can use the command:
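A sketch of the startup invocation, assuming Solr 5.x's standard -c (cloud mode), -z (ZooKeeper) and -p (port) options; replace <zookeeper_host:port> with your quorum:
<verbatim>
$SOLR_HOME/bin/solr start -c -z <zookeeper_host:port> -p 8983
</verbatim>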
* Run the following commands from the SOLR_HOME directory to create collections in Solr corresponding to the indexes that Atlas uses. If the Atlas and Solr instances are on two different hosts,
first copy the required configuration files from ATLAS_HOME/conf/solr on the Atlas instance host to the Solr instance host. SOLR_CONF in the commands below refers to the directory to which the Solr configuration files have been copied on the Solr instance host.
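A sketch of the collection-creation commands, assuming vertex_index, edge_index and fulltext_index are the index names Atlas expects; substitute SOLR_CONF and the shard/replica counts for your cluster:
<verbatim>
bin/solr create -c vertex_index -d SOLR_CONF -shards #numShards -replicationFactor #replicationFactor
bin/solr create -c edge_index -d SOLR_CONF -shards #numShards -replicationFactor #replicationFactor
bin/solr create -c fulltext_index -d SOLR_CONF -shards #numShards -replicationFactor #replicationFactor
</verbatim>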
Note: If numShards and replicationFactor are not specified, they default to 1, which suffices if you are trying out Solr with Atlas on a single-node instance.
Otherwise, specify numShards according to the number of hosts in the Solr cluster and the maxShardsPerNode configuration.
...
The number of replicas (replicationFactor) can be set according to the redundancy required.
* Change Atlas configuration to point to the Solr instance setup. Please make sure the following configurations are set to the values below in ATLAS_HOME/conf/application.properties
<verbatim>
atlas.graph.index.search.backend=solr5
atlas.graph.index.search.solr.mode=cloud
atlas.graph.index.search.solr.zookeeper-url=<the ZK quorum setup for solr as comma separated value> eg: 10.1.6.4:2181,10.1.6.5:2181
</verbatim>
* Restart Atlas
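A restart is simply a stop followed by a start, for example with the scripts shipped in the Atlas bin directory (a sketch; paths assume you run from ATLAS_HOME):
<verbatim>
bin/atlas_stop.py
bin/atlas_start.py
</verbatim>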
For more information on Titan Solr configuration, please refer to http://s3.thinkaurelius.com/docs/titan/0.5.4/solr.htm
* To change the port, use the -port option.
* The Atlas server starts with conf from {package dir}/conf. To override this (to use the same conf with multiple Atlas upgrades), set the environment variable METADATA_CONF to the path of the conf directory.
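For example, a sketch of starting the server with an external conf directory and an explicit port (both values here are placeholders):
<verbatim>
export METADATA_CONF=/etc/atlas/conf
bin/atlas_start.py -port 21000
</verbatim>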
Once Atlas is started, you can view the status of Atlas entities using the web-based dashboard. You can open your browser at the corresponding port to use the web UI.
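For example, assuming the stock configuration where the server listens on port 21000, the dashboard would be reachable at:
<verbatim>
http://localhost:21000/
</verbatim>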
The property required for authenticating to the server (if authentication is enabled):
* <code>atlas.http.authentication.type</code> (simple|kerberos) [default: simple] - the authentication type
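For example, a sketch of switching to Kerberos authentication in ATLAS_HOME/conf/application.properties (only the type property is documented here; other Kerberos-related properties may also be required):
<verbatim>
atlas.http.authentication.type=kerberos
</verbatim>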
---+++ SOLR Kerberos configuration
If the authentication type specified is 'kerberos', then the Kerberos ticket cache will be accessed for authenticating to the server. (Therefore, the client is required to authenticate to the KDC prior to communicating with the server, using 'kinit' or a similar mechanism.)
See [[https://cwiki.apache.org/confluence/display/RANGER/How+to+configure+Solr+Cloud+with+Kerberos+for+Ranger+0.5][the Apache SOLR Kerberos configuration]].
* Add a principal and generate the keytab file for Solr. Create a keytab per host for each host where Solr is going to run, and use the principal name with the host (e.g. addprinc -randkey solr/${HOST1}@EXAMPLE.COM; replace ${HOST1} with the actual host names). A sketch of these commands follows this list.
* Add a principal and generate the keytab file for authenticating HTTP requests. (Note that if Ambari is used to Kerberize the cluster, the keytab /etc/security/keytabs/spnego.service.keytab can be used.)
* Create collections in Solr corresponding to the indexes that Atlas uses and change the Atlas configuration to point to the Solr instance setup as described in the [[InstallationSteps][Install Steps]].
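A sketch of the principal and keytab creation referenced in the first two steps, assuming MIT Kerberos (kadmin) and the EXAMPLE.COM realm used above; the keytab file names are placeholders:
<verbatim>
kadmin: addprinc -randkey solr/${HOST1}@EXAMPLE.COM
kadmin: xst -k solr.service.keytab solr/${HOST1}@EXAMPLE.COM
kadmin: addprinc -randkey HTTP/${HOST1}@EXAMPLE.COM
kadmin: xst -k spnego.service.keytab HTTP/${HOST1}@EXAMPLE.COM
</verbatim>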