---
name: High Availability
route: /HighAvailability
menu: Documentation
submenu: Features
---

import themen from 'theme/styles/styled-colors';
import * as theme from 'react-syntax-highlighter/dist/esm/styles/hljs';
import SyntaxHighlighter from 'react-syntax-highlighter';

# Fault Tolerance and High Availability Options

## Introduction

Apache Atlas uses and interacts with a variety of systems to provide metadata management and data lineage to data
administrators. By choosing and configuring these dependencies appropriately, it is possible to achieve a high degree
of service availability with Atlas. This document describes the state of high availability support in Atlas,
including its capabilities and current limitations, and also the configuration required for achieving this level of
high availability.

[The architecture page](#/Architecture) in the wiki gives an overview of the various components that make up Atlas.
The options mentioned below for the various components derive context from that page, which is worth reviewing
before proceeding.

## Atlas Web Service

Currently, the Atlas Web Service has a limitation that it can only have one active instance at a time. In earlier
releases of Atlas, a backup instance could be provisioned and kept available. However, a manual failover was
required to make this backup instance active.
Starting with this release, Atlas supports multiple instances of the Atlas Web Service in an active/passive configuration
with automated failover. This means that users can deploy and start multiple instances of the Atlas Web Service on
different physical hosts at the same time. One of these instances will be automatically selected as an 'active'
instance to service user requests. The others will automatically be deemed 'passive'. If the 'active' instance
becomes unavailable either because it is deliberately stopped, or due to unexpected failures, one of the other
instances will automatically be elected as an 'active' instance and start to service user requests.

An 'active' instance is the only instance that can respond to user requests correctly. It can create, delete, modify
or respond to queries on metadata objects. A 'passive' instance will accept user requests, but will redirect them,
via an HTTP redirect, to the currently known 'active' instance. Specifically, a passive instance will not itself
respond to any queries on metadata objects. However, all instances (both active and passive) will respond to admin
requests that return information about that instance.
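
As a quick illustration of the redirect behaviour, here is a minimal Python sketch (not part of Atlas). The
hostnames, port and credentials are assumptions; the typedefs endpoint is used only as an example metadata request.

<SyntaxHighlighter wrapLines={true} language="python" style={theme.dark}>
{`# Minimal sketch: hostnames, port and credentials are assumptions.
import requests

PASSIVE = "http://host2.company.com:21000"

# requests follows HTTP redirects by default, so a metadata request sent to a
# passive instance is transparently answered by the current active instance.
resp = requests.get(PASSIVE + "/api/atlas/v2/types/typedefs",
                    auth=("admin", "admin"), timeout=10)
if resp.history:
    print("redirected via status codes:", [r.status_code for r in resp.history])
print("final URL:", resp.url)`}
</SyntaxHighlighter>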

When configured in High Availability mode, users get the following operational benefits:

   * **Uninterrupted service during maintenance intervals**: If an active instance of the Atlas Web Service needs to be brought down for maintenance, another instance would automatically become active and can service requests.
   * **Uninterrupted service in event of unexpected failures**: If an active instance of the Atlas Web Service fails due to software or hardware errors, another instance would automatically become active and can service requests.

In the following sub-sections, we describe the steps required to set up High Availability for the Atlas Web Service.
We also describe how the deployment and client can be designed to take advantage of this capability.
Finally, we describe a few details of the underlying implementation.

### Setting up the High Availability feature in Atlas

The following prerequisites must be met for setting up the High Availability feature.

   * Ensure that you install Apache Zookeeper on a cluster of machines (a minimum of 3 servers is recommended for production).
   * Select 2 or more physical machines to run the Atlas Web Service instances on. These machines define what we refer to as a 'server ensemble' for Atlas.


To set up High Availability in Atlas, a few configuration options must be defined in the `atlas-application.properties`
file. While the complete list of configuration items is defined in the [Configuration Page](#/Configuration), this
section lists a few of the main options.

* High Availability is an optional feature in Atlas. Hence, it must be enabled by setting the configuration option `atlas.server.ha.enabled` to true.
* Next, define a list of identifiers, one for each physical machine you have selected for the Atlas Web Service instance. These identifiers can be simple strings like `id1`, `id2` etc. They should be unique and should not contain a comma.
* Define a comma-separated list of these identifiers as the value of the option `atlas.server.ids`.
* For each physical machine, list the IP address/hostname and port as the value of the configuration `atlas.server.address.id`, where `id` refers to the identifier string for this physical machine.
   * For example, if you have selected 2 machines with hostnames `host1.company.com` and `host2.company.com`, you can define the configuration options as below:

<SyntaxHighlighter wrapLines={true} language="java" style={theme.dark}>
{`atlas.server.ids=id1,id2
atlas.server.address.id1=host1.company.com:21000
atlas.server.address.id2=host2.company.com:21000`}
</SyntaxHighlighter>

   * Define the Zookeeper quorum which will be used by the Atlas High Availability feature.

<SyntaxHighlighter wrapLines={true} language="java" style={theme.dark}>
{`atlas.server.ha.zookeeper.connect=zk1.company.com:2181,zk2.company.com:2181,zk3.company.com:2181`}
</SyntaxHighlighter>

   * You can review other configuration options that are defined for the High Availability feature, and set them up as desired in the `atlas-application.properties` file.
   * For production environments, the components that Atlas depends on must also be set up in High Availability mode. This is described in detail in the following sections. Follow those instructions to set them up and configure them.
   * Install the Atlas software on the selected physical machines.
   * Copy the `atlas-application.properties` file created using the steps above to the configuration directory of all the machines.
   * Start the dependent components.
   * Start each instance of the Atlas Web Service.

To verify that High Availability is working, run the following script on each of the instances where the Atlas Web
Service is installed.

<SyntaxHighlighter wrapLines={true} language="java" style={theme.dark}>
{`$ATLAS_HOME/bin/atlas_admin.py -status`}
</SyntaxHighlighter>

This script can print one of the values below as its response:

   * **ACTIVE**: This instance is active and can respond to user requests.
   * **PASSIVE**: This instance is PASSIVE. It will redirect any user requests it receives to the current active instance.
   * **BECOMING_ACTIVE**: This would be printed if the server is transitioning to become an ACTIVE instance. The server cannot service any metadata user requests in this state.
   * **BECOMING_PASSIVE**: This would be printed if the server is transitioning to become a PASSIVE instance. The server cannot service any metadata user requests in this state.

Under normal operating circumstances, only one of these instances should print the value *ACTIVE* in response to
the script, and the others would print *PASSIVE*.
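
The same check can be done over REST, since all instances respond to admin status requests. Below is a minimal
Python sketch that polls every member of the ensemble; the hostnames and credentials are assumptions.

<SyntaxHighlighter wrapLines={true} language="python" style={theme.dark}>
{`# Minimal sketch: hostnames and credentials are assumptions for illustration.
import requests

SERVERS = ["http://host1.company.com:21000", "http://host2.company.com:21000"]

for base in SERVERS:
    try:
        resp = requests.get(base + "/api/atlas/admin/status",
                            auth=("admin", "admin"), timeout=5)
        print(base, "->", resp.json().get("Status"))
    except requests.RequestException as err:
        print(base, "-> unreachable:", err)`}
</SyntaxHighlighter>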

### Configuring clients to use the High Availability feature

The Atlas Web Service can be accessed in two ways:

   * **Using the Atlas Web UI**: This is a browser-based client that can be used to query the metadata stored in Atlas.
   * **Using the Atlas REST API**: As Atlas exposes a RESTful API, one can use any standard REST client, including libraries in other applications. In fact, Atlas ships with a client called AtlasClient that can be used as an example to build REST client access.

There are two options for taking advantage of the High Availability feature in clients.

#### Using an intermediate proxy

The simplest solution to enable highly available access to Atlas is to install and configure an intermediate proxy
that can transparently switch services based on their status. One such proxy solution is [HAProxy](http://www.haproxy.org/).

Here is an example HAProxy configuration that can be used. Note this is provided for illustration only, and not as a
recommended production configuration. For that, please refer to the HAProxy documentation for appropriate instructions.

<SyntaxHighlighter wrapLines={true} language="java" style={theme.dark}>
{`frontend atlas_fe
  bind *:41000
  default_backend atlas_be
backend atlas_be
  mode http
  option httpchk GET /api/atlas/admin/status
  http-check expect string ACTIVE
  balance roundrobin
  server host1_21000 host1:21000 check
  server host2_21000 host2:21000 check backup
listen atlas
  bind localhost:42000`}
</SyntaxHighlighter>

The above configuration binds HAProxy to listen on port 41000 for incoming client connections. It then routes
the connections to either of the hosts host1 or host2 depending on an HTTP status check. The status check is
done using an HTTP GET on the REST URL `/api/atlas/admin/status`, and is deemed successful only if the HTTP response
contains the string ACTIVE.

#### Using automatic detection of active instance

If one does not want to set up and manage a separate proxy, then the other option to use the High Availability
feature is to build a client application that is capable of detecting status and retrying operations. In such a
setting, the client application can be launched with the URLs of all Atlas Web Service instances that form the
ensemble. The client should then call the REST URL `/api/atlas/admin/status` on each of these to determine which is
the active instance. The response from the active instance would be of the form `{Status:ACTIVE}`. Also, when the
client faces any exceptions in the course of an operation, it should again determine which of the remaining URLs
is active and retry the operation.
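
Below is a minimal Python sketch of this detect-and-retry logic. The hostnames and credentials are assumptions;
the AtlasClient class described next is the reference implementation of this pattern.

<SyntaxHighlighter wrapLines={true} language="python" style={theme.dark}>
{`# Minimal sketch of detect-and-retry; hostnames and credentials are assumptions.
import requests

ENSEMBLE = ["http://host1.company.com:21000", "http://host2.company.com:21000"]
AUTH = ("admin", "admin")

def find_active():
    # Return the base URL of the instance currently reporting ACTIVE.
    for base in ENSEMBLE:
        try:
            resp = requests.get(base + "/api/atlas/admin/status",
                                auth=AUTH, timeout=5)
            if resp.json().get("Status") == "ACTIVE":
                return base
        except requests.RequestException:
            continue  # instance unreachable; try the next one
    raise RuntimeError("no ACTIVE Atlas instance found")

def get_with_retry(path, retries=3):
    # Issue a GET against the active instance, re-detecting it on failure.
    for _ in range(retries):
        try:
            return requests.get(find_active() + path, auth=AUTH, timeout=10)
        except requests.RequestException:
            continue  # a failover may be in progress; re-detect and retry
    raise RuntimeError("operation failed after retries")`}
</SyntaxHighlighter>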

The AtlasClient class that ships with Atlas can be used as an example client library that implements the logic
for working with an ensemble and selecting the right active server instance.

Utilities in Atlas, like `quick_start.py` and `import-hive.sh`, can be configured to run with multiple server
URLs. When launched in this mode, the AtlasClient automatically selects and works with the current active instance.
If a proxy is set up in between, then its address can be used when running `quick_start.py` or `import-hive.sh`.

### Implementation Details of Atlas High Availability

The Atlas High Availability work is tracked under the master JIRA
[ATLAS-510](https://issues.apache.org/jira/browse/ATLAS-510).
The JIRAs filed under it have detailed information about how the High Availability feature has been implemented.
At a high level, the following points can be called out:

   * The automatic selection of an active instance, as well as automatic failover to a new active instance, happen through a leader election algorithm.
   * For leader election, we use the [Leader Latch Recipe](http://curator.apache.org/curator-recipes/leader-latch.html) of [Apache Curator](http://curator.apache.org) (see the sketch after this list).
   * The active instance is the only one which initializes, modifies or reads state in the backend stores to keep them consistent.
   * Also, when an instance is elected as active, it refreshes any cached information from the backend stores to get up to date.
   * A servlet filter ensures that only the active instance services user requests. If a passive instance receives these requests, it automatically redirects them to the current active instance.
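
Atlas implements the leader election in Java using Curator's Leader Latch. Purely to illustrate the idea, here is
a minimal sketch of an equivalent election using the Python kazoo client; the ZooKeeper hosts, znode path and
identifier are assumptions.

<SyntaxHighlighter wrapLines={true} language="python" style={theme.dark}>
{`# Illustration only: Atlas itself uses Apache Curator's Leader Latch (Java).
# The ZooKeeper hosts, znode path and identifier below are assumptions.
from kazoo.client import KazooClient

zk = KazooClient(hosts="zk1.company.com:2181,zk2.company.com:2181,zk3.company.com:2181")
zk.start()

def serve_as_active():
    # Runs only on the instance that wins the election. A real server would
    # refresh cached state from the backend stores and service requests until
    # it loses leadership or shuts down.
    print("elected leader; acting as the active instance")

# Each contender blocks here; exactly one is leader at any time, and a new
# leader is elected automatically when the current one goes away.
election = zk.Election("/leader-election-demo", "id1")
election.run(serve_as_active)`}
</SyntaxHighlighter>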

## Metadata Store

As described above, Atlas uses JanusGraph to store the metadata it manages. By default, Atlas uses a standalone HBase
instance as the backing store for JanusGraph. In order to provide HA for the metadata store, we recommend that Atlas be
configured to use distributed HBase as the backing store for JanusGraph. Doing this means you benefit from the
HA guarantees HBase provides. In order to configure Atlas to use HBase in HA mode, do the following:
   * Choose an existing HBase cluster that is set up in HA mode to configure in Atlas (OR) Set up a new HBase cluster in [HA mode](http://hbase.apache.org/book.html#quickstart_fully_distributed).
      * If setting up HBase for Atlas, please follow the instructions listed for setting up HBase in the [Installation Steps](#/Installation).
   * We recommend using more than one HBase master (at least 2) in the cluster, on different physical hosts, that use Zookeeper for coordination to provide redundancy and high availability of HBase.
      * Refer to the [Configuration page](#/Configuration) for the options to configure in atlas.properties to set up Atlas with HBase. A sketch of the relevant options is shown after this list.
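
As a sketch of the relevant entries in `atlas-application.properties` (the ZooKeeper hostnames are assumptions;
see the [Configuration page](#/Configuration) for the authoritative list):

<SyntaxHighlighter wrapLines={true} language="java" style={theme.dark}>
{`atlas.graph.storage.backend=hbase
atlas.graph.storage.hostname=zk1.company.com,zk2.company.com,zk3.company.com`}
</SyntaxHighlighter>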
## Index Store
As described above, Atlas indexes metadata through JanusGraph to support full text search queries. In order to provide HA
for the index store, we recommend that Atlas be configured to use Solr or Elasticsearch as the backing index store for JanusGraph.

### Solr
In order to configure Atlas to use Solr in HA mode, do the following:

   * Choose an existing SolrCloud cluster set up in HA mode to configure in Atlas (OR) Set up a new [SolrCloud cluster](https://cwiki.apache.org/confluence/display/solr/SolrCloud).
      * Ensure Solr is brought up on at least 2 physical hosts for redundancy, and each host runs a Solr node.
      * We recommend the number of replicas to be set to at least 2 for redundancy.
   * Create the SolrCloud collections required by Atlas, as described in the [Installation Steps](#/Installation).
   * Refer to the [Configuration page](#/Configuration) for the options to configure in atlas.properties to set up Atlas with Solr. A sketch of the relevant options is shown after this list.
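
As a sketch of the relevant entries (hostnames are assumptions; see the [Configuration page](#/Configuration) for
the authoritative list):

<SyntaxHighlighter wrapLines={true} language="java" style={theme.dark}>
{`atlas.graph.index.search.backend=solr
atlas.graph.index.search.solr.mode=cloud
atlas.graph.index.search.solr.zookeeper-url=zk1.company.com:2181,zk2.company.com:2181,zk3.company.com:2181`}
</SyntaxHighlighter>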
### Elasticsearch (Tech Preview)
In order to configure Atlas to use Elasticsearch in HA mode, do the following:

   * Choose an existing Elasticsearch cluster to configure in Atlas, (OR) set up a new [Elasticsearch cluster](https://www.elastic.co/guide/en/elasticsearch/reference/5.6/setup.html).
      * Ensure that Elasticsearch is brought up on at least five physical hosts for redundancy.
      * A replica count of 3 is recommended.
   * Refer to the [Configuration page](#/Configuration) for the options to configure in atlas.properties to set up Atlas with Elasticsearch. A sketch of the relevant options is shown after this list.
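
As a sketch of the relevant entries (hostnames are assumptions; see the [Configuration page](#/Configuration) for
the authoritative list):

<SyntaxHighlighter wrapLines={true} language="java" style={theme.dark}>
{`atlas.graph.index.search.backend=elasticsearch
atlas.graph.index.search.hostname=es1.company.com,es2.company.com,es3.company.com
atlas.graph.index.search.elasticsearch.client-only=true`}
</SyntaxHighlighter>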
## Notification Server
Metadata notification events from Hooks are sent to Atlas by writing them to a Kafka topic called **ATLAS_HOOK**. Similarly, events from
Atlas to other integrating components like Ranger are written to a Kafka topic called **ATLAS_ENTITIES**. Since Kafka
persists these messages, the events will not be lost even if the consumers are down when the events are sent. In
addition, we recommend Kafka also be set up for fault tolerance so that it has higher availability guarantees. In order
to configure Atlas to use Kafka in HA mode, do the following:

* Choose an existing Kafka cluster set up in HA mode to configure in Atlas (OR) Set up a new Kafka cluster.
* We recommend that there is more than one Kafka broker in the cluster, on different physical hosts, that use Zookeeper for coordination to provide redundancy and high availability of Kafka.
   * Set up at least 2 physical hosts for redundancy, each hosting a Kafka broker.
* Set up Kafka topics for Atlas usage:
   * The number of partitions for the ATLAS topics should be set to 1 (numPartitions).
   * Decide the number of replicas for the Kafka topics: set this to at least 2 for redundancy.
   * Run the following commands:

<SyntaxHighlighter wrapLines={true} language="java" style={theme.dark}>
{`$KAFKA_HOME/bin/kafka-topics.sh --create --zookeeper <list of zookeeper host:port entries> --topic ATLAS_HOOK --replication-factor <numReplicas> --partitions 1
$KAFKA_HOME/bin/kafka-topics.sh --create --zookeeper <list of zookeeper host:port entries> --topic ATLAS_ENTITIES --replication-factor <numReplicas> --partitions 1`}
</SyntaxHighlighter>

Here, KAFKA_HOME points to the Kafka installation directory.

   * In atlas-application.properties, set the following configuration:

<SyntaxHighlighter wrapLines={true} language="java" style={theme.dark}>
{`atlas.notification.embedded=false
atlas.kafka.zookeeper.connect=<comma separated list of servers forming Zookeeper quorum used by Kafka>
atlas.kafka.bootstrap.servers=<comma separated list of Kafka broker endpoints in host:port form>`}
</SyntaxHighlighter>

Specify at least 2 Kafka broker endpoints for redundancy.

## Known Issues

   * If the HBase region servers hosting the Atlas table are down, Atlas would not be able to store or retrieve metadata from HBase until they are brought back online.