Commit 6d907bd0 by kevalbhatt

ATLAS-3429: Atlas documentation redirect using hash router

parent ea02e10c
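The renames in this patch are mechanical: hyphenated path routes become camel-cased hash routes (e.g. `/Hook-HBase` → `#/HookHBase`). A sketch of that mapping — illustrative only, not code from this commit; the `toHashRoute` helper is hypothetical:

```javascript
// Hypothetical helper (not part of this commit): maps an old hyphenated doc
// route to the new hash-router form used throughout this patch.
// Note: a few routes also change capitalization (e.g. '/security' -> '/Security'),
// which this simple sketch does not attempt to handle.
function toHashRoute(oldPath) {
  // '/Hook-HBase' -> '#/HookHBase', '/Import-API-Options' -> '#/ImportAPIOptions'
  return '#/' + oldPath.replace(/^\//, '').split('-').join('');
}

console.log(toHashRoute('/Hook-HBase'));         // prints '#/HookHBase'
console.log(toHashRoute('/Import-API-Options')); // prints '#/ImportAPIOptions'
```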
@@ -92,7 +92,7 @@ or
**[Atlas 1.1.0](../1.1.0/index) (Released on 2018/09/17)**
* Updated authorization model to support access control on relationship operations
* Added support for AWS S3 datatypes, in Atlas server and Hive hook
-* Updated [[http://atlas.apache.org/JanusGraph.html][JanusGraph]] version from 0.2.0 to 0.3.0
+* Updated [JanusGraph](https://janusgraph.org/) version from 0.2.0 to 0.3.0
* Updated hooks to send Kafka notifications asynchronously
* Enhanced classification-propagation with options to handle entity-deletes
* BugFixes and Optimizations
...
@@ -40,7 +40,7 @@ the term gets deleted as well. A term can belong to zero or more categories, whi
wider contexts. A term can be assigned/linked to zero or more entities in Apache Atlas. A term can be classified using
classifications (tags) and the same classification gets applied to the entities that the term is assigned to.
-###What is a Glossary category ?
+### What is a Glossary category ?
A category is a way of organizing the term(s) so that the term's context can be enriched. A category may or may not have
contained hierarchies i.e. child category hierarchy. A category's qualifiedName is derived using its hierarchical location
@@ -221,6 +221,7 @@ Following operations are supported by Atlas, the details of REST interface can b
##### JSON structure
* Glossary
<SyntaxHighlighter wrapLines={true} language="json" style={theme.dark}>
{`{
"guid": "2f341934-f18c-48b3-aa12-eaa0a2bfce85",
@@ -267,6 +268,7 @@ Following operations are supported by Atlas, the details of REST interface can b
</SyntaxHighlighter>
* Term
<SyntaxHighlighter wrapLines={true} language="json" style={theme.dark}>
{`{
"guid": "e441a540-ee55-4fc8-8eaf-4b9943d8929c",
@@ -303,6 +305,7 @@ Following operations are supported by Atlas, the details of REST interface can b
</SyntaxHighlighter>
* Category
<SyntaxHighlighter wrapLines={true} language="json" style={theme.dark}>
{`{
"guid": "7f041401-de8c-443f-a3b7-7bf5a910ff6f",
...
@@ -19,7 +19,7 @@ of service availability with Atlas. This document describes the state of high av
including its capabilities and current limitations, and also the configuration required for achieving this level of
high availability.
-[The architecture page](Architecture) in the wiki gives an overview of the various components that make up Atlas.
+[The architecture page](#/Architecture) in the wiki gives an overview of the various components that make up Atlas.
The options mentioned below for various components derive context from the above page, and it would be worthwhile to
review it before proceeding to read this page.
@@ -59,7 +59,7 @@ The following pre-requisites must be met for setting up the High Availability fe
* Select 2 or more physical machines to run the Atlas Web Service instances on. These machines define what we refer to as a 'server ensemble' for Atlas.
To set up High Availability in Atlas, a few configuration options must be defined in the `atlas-application.properties`
-file. While the complete list of configuration items are defined in the [Configuration Page](Configuration), this
+file. While the complete list of configuration items is defined in the [Configuration Page](#/Configuration), this
section lists a few of the main options.
* High Availability is an optional feature in Atlas. Hence, it must be enabled by setting the configuration option `atlas.server.ha.enabled` to true.
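Put together, the options described here might look like the following in `atlas-application.properties`. This is a sketch: only `atlas.server.ha.enabled` is named in the text above; the other keys come from the Atlas HA configuration, and all host names and ids are placeholder values.

```properties
# Sketch of an HA configuration (host names and ids are placeholders)
atlas.server.ha.enabled=true
# Identifiers for the two-server ensemble, and the address of each instance
atlas.server.ids=id1,id2
atlas.server.address.id1=host1.company.com:21000
atlas.server.address.id2=host2.company.com:21000
# Zookeeper ensemble used for coordination between the instances
atlas.server.ha.zookeeper.connect=zk1.company.com:2181,zk2.company.com:2181,zk3.company.com:2181
```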
@@ -179,9 +179,9 @@ configured to use distributed HBase as the backing store for JanusGraph. Doing
HA guarantees HBase provides. In order to configure Atlas to use HBase in HA mode, do the following:
* Choose an existing HBase cluster that is set up in HA mode to configure in Atlas (OR) Set up a new HBase cluster in [HA mode](http://hbase.apache.org/book.html#quickstart_fully_distributed).
-* If setting up HBase for Atlas, please following instructions listed for setting up HBase in the [Installation Steps](InstallationSteps).
+* If setting up HBase for Atlas, please follow the instructions listed for setting up HBase in the [Installation Steps](#/Installation).
* We recommend using more than one HBase master (at least 2) in the cluster on different physical hosts that use Zookeeper for coordination to provide redundancy and high availability of HBase.
-* Refer to the [Configuration page](Configuration) for the options to configure in atlas.properties to setup Atlas with HBase.
+* Refer to the [Configuration page](#/Configuration) for the options to configure in atlas.properties to set up Atlas with HBase.
## Index Store
@@ -194,8 +194,8 @@ In order to configure Atlas to use Solr in HA mode, do the following:
* Choose an existing !SolrCloud cluster setup in HA mode to configure in Atlas (OR) Set up a new [SolrCloud cluster](https://cwiki.apache.org/confluence/display/solr/SolrCloud).
* Ensure Solr is brought up on at least 2 physical hosts for redundancy, and each host runs a Solr node.
* We recommend the number of replicas to be set to at least 2 for redundancy.
-* Create the !SolrCloud collections required by Atlas, as described in [Installation Steps](InstallationSteps)
-* Refer to the [Configuration page](Configuration) for the options to configure in atlas.properties to setup Atlas with Solr.
+* Create the !SolrCloud collections required by Atlas, as described in [Installation Steps](#/Installation)
+* Refer to the [Configuration page](#/Configuration) for the options to configure in atlas.properties to setup Atlas with Solr.
### Elasticsearch (Tech Preview)
In order to configure Atlas to use Elasticsearch in HA mode, do the following:
@@ -203,7 +203,7 @@ In order to configure Atlas to use Elasticsearch in HA mode, do the following:
* Choose an existing Elasticsearch cluster setup, (OR) set up a new [Elasticsearch cluster](https://www.elastic.co/guide/en/elasticsearch/reference/5.6/setup.html).
* Ensure that Elasticsearch is brought up on at least five physical hosts for redundancy.
* A replica count of 3 is recommended
-* Refer to the [Configuration page](Configuration) for the options to configure in atlas.properties to setup Atlas with Elasticsearch.
+* Refer to the [Configuration page](#/Configuration) for the options to configure in atlas.properties to setup Atlas with Elasticsearch.
## Notification Server
...
@@ -55,11 +55,11 @@ notification events. Events are written by the hooks and Atlas to different Kafk
### Metadata sources
Atlas supports integration with many sources of metadata out of the box. More integrations will be added in the future
as well. Currently, Atlas supports ingesting and managing metadata from the following sources:
-* [HBase](/Hook-HBase)
-* [Hive](/Hook-Hive)
-* [Sqoop](/Hook-Sqoop)
-* [Storm](/Hook-Storm)
-* [Kafka](/Hook-Kafka)
+* [HBase](#/HookHBase)
+* [Hive](#/HookHive)
+* [Sqoop](#/HookSqoop)
+* [Storm](#/HookStorm)
+* [Kafka](#/HookKafka)
The integration implies two things:
There are metadata models that Atlas defines natively to represent objects of these components.
...
---
name: Falcon
-route: /Hook-Falcon
+route: /HookFalcon
menu: Documentation
submenu: Hooks
---
@@ -62,7 +62,7 @@ The following properties in `<atlas-conf>`/atlas-application.properties control
* atlas.hook.falcon.keepAliveTime - keep alive time in msecs. default 10
* atlas.hook.falcon.queueSize - queue size for the threadpool. default 10000
-Refer [Configuration](Configuration) for notification related configurations
+Refer to [Configuration](#/Configuration) for notification-related configurations
## NOTES
...
---
name: HBase
-route: /Hook-HBase
+route: /HookHBase
menu: Documentation
submenu: Hooks
---
...
---
name: Hive
-route: /Hook-Hive
+route: /HookHive
menu: Documentation
submenu: Hooks
---
...
---
name: Kafka
-route: /Hook-Kafka
+route: /HookKafka
menu: Documentation
submenu: Hooks
---
...
---
name: Sqoop
-route: /Hook-Sqoop
+route: /HookSqoop
menu: Documentation
submenu: Hooks
---
...
---
name: Storm
-route: /Hook-Storm
+route: /HookStorm
menu: Documentation
submenu: Hooks
---
...
---
name: Export API
-route: /Export-API
+route: /ExportAPI
menu: Documentation
submenu: Import/Export
---
@@ -15,7 +15,7 @@ The general approach is:
* The API, if successful, will return the stream in the format specified.
* An error will be returned on failure of the call.
-See [here](http://atlas.apache.org/Export-HDFS-API.html) for details on exporting *hdfs_path* entities.
+See [here](#/ExportHDFSAPI) for details on exporting *hdfs_path* entities.
|**Title**|**Export API**|
| ------------ | ------------ |
@@ -49,11 +49,11 @@ Current implementation has 2 options. Both are optional:
* _fetchType_ This option configures the approach used for fetching entities. It has the following values:
* _FULL_: This fetches all the entities that are connected directly and indirectly to the starting entity. E.g. If a starting entity specified is a table, then this option will fetch the table, database and all the other tables within the database.
* _CONNECTED_: This fetches all the entities that are connected directly to the starting entity. E.g. If a starting entity specified is a table, then this option will fetch the table and the database entity only.
-* _INCREMENTAL_: See [here](http://atlas.apache.org/Incremental-Export.html) for details.
+* _INCREMENTAL_: See [here](#/IncrementalExport) for details.
If no _matchType_ is specified, exact match is used, which means that the entire string is used in the search criteria.
-Searching using _matchType_ applies for all types of entities. It is particularly useful for matching entities of type hdfs_path (see (here)[Export-HDFS-API]).
+Searching using _matchType_ applies for all types of entities. It is particularly useful for matching entities of type hdfs_path (see [here](#/ExportHDFSAPI)).
The _fetchType_ option defaults to _FULL_.
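The options above combine into the JSON body of an export request. A sketch of building such a body (illustrative; the payload shape follows _AtlasExportRequest_ with `itemsToExport` and `options`, but the type and qualified names below are invented sample values):

```javascript
// Illustrative builder for an export request body; 'hive_db' and 'stocks@cl1'
// are sample values, not taken from this document.
function buildExportRequest(typeName, qualifiedName, fetchType, matchType) {
  const request = {
    itemsToExport: [{ typeName, uniqueAttributes: { qualifiedName } }],
    options: {}
  };
  if (fetchType) request.options.fetchType = fetchType; // FULL | CONNECTED | INCREMENTAL
  if (matchType) request.options.matchType = matchType; // e.g. startsWith
  return request;
}

console.log(JSON.stringify(buildExportRequest('hive_db', 'stocks@cl1', 'CONNECTED')));
```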
...
---
name: Export HDFS API
-route: /Export-HDFS-API
+route: /ExportHDFSAPI
menu: Documentation
submenu: Import/Export
---
@@ -30,7 +30,7 @@ __Sample HDFS Setup__
### Export API Using matchType
-To export entities that represent HDFS path, use the Export API using the _matchType_ option. Details can be found [here](Export-API).
+To export entities that represent HDFS path, use the Export API using the _matchType_ option. Details can be found [here](#/ExportAPI).
### Example Using CURL Calls
Below are sample CURL calls that perform export operations on the _Sample HDFS Setup_ shown above.
...
---
name: Import API
-route: /Import-API
+route: /ImportAPI
menu: Documentation
submenu: Import/Export
---
@@ -63,7 +63,7 @@ __Method Signature for Import File__
</SyntaxHighlighter>
__Import Options__
-Please see [here](Import-API-Options) for the available options during import process.
+Please see [here](#/ImportAPIOptions) for the available options during the import process.
__AtlasImportResult Response__
The API will return the results of the import operation in the format defined by the _AtlasImportResult_:
...
---
name: Import API Options
-route: /Import-API-Options
+route: /ImportAPIOptions
menu: Documentation
submenu: Import/Export
---
...
---
name: Import Export API
-route: /Import-Export-API
+route: /ImportExportAPI
menu: Documentation
submenu: Import/Export
---
@@ -13,13 +13,13 @@ import SyntaxHighlighter from 'react-syntax-highlighter';
### What's New
The release of 0.8.3 includes the following improvements to Export and Import APIs:
-* Export: Support for [Incremental Export](Incremental-Export).
-* Export & Import: Support for [replicated attributes](ReplicatedToFromAttributes) to entities made possible by [SoftReference](SoftReference) entity attribute option.
-* Export option: [skipLineage](skipLineage).
+* Export: Support for [Incremental Export](#/IncrementalExport).
+* Export & Import: Support for [replicated attributes](#/ReplicatedAttributes) to entities made possible by the [SoftReference](#/SoftReference) entity attribute option.
+* Export option: [skipLineage](#/IncrementalExport).
* New entity transforms framework.
-* New [AtlasServer](AtlasServer) entity type.
-* Export: [Automatic creation of HDFS path](Export-HDFS-API) requested entities.
-* New [ExportImportAudits](ExportImportAudits) for Export & Import operations.
+* New [AtlasServer](#/AtlasServer) entity type.
+* Export: [Automatic creation of HDFS path](#/ExportHDFSAPI) requested entities.
+* New [ExportImportAudits](#/ExportImportAudits) for Export & Import operations.
### Background
The Import-Export APIs for Atlas facilitate transfer of data to and from a cluster that has Atlas provisioned.
@@ -33,7 +33,7 @@ The APIs are available only to _admin_ user.
Only a single import or export operation can be performed at a given time. The operations have the potential to generate large amounts of data. They can also put pressure on resources. This restriction tries to alleviate this problem.
-For Import-Export APIs relating to HDFS path, can be found [here](Import-Export-HDFS-Path).
+Import-Export APIs relating to HDFS path can be found [here](#/ExportHDFSAPI).
For additional information please refer to the following:
* [ATLAS-1503](https://issues.apache.org/jira/browse/ATLAS-1503) Original Import-Export API requirements.
@@ -49,8 +49,8 @@ If an import or export operation is initiated while another is in progress, the
Unhandled errors will be returned as Internal error code 500.
### REST API Reference
-* [Export](Export-API)
-* [Export HDFS](Export-HDFS-API)
-* [Import](Import-API)
-* [Import Options](Import-API-Options)
+* [Export](#/ExportAPI)
+* [Export HDFS](#/ExportHDFSAPI)
+* [Import](#/ImportAPI)
+* [Import Options](#/ImportAPIOptions)
---
name: Incremental Export
-route: /Incremental-Export
+route: /IncrementalExport
menu: Documentation
submenu: Import/Export
---
...
@@ -52,15 +52,15 @@ capabilities around these data assets for data scientists, analysts and the data
## Getting Started
-* [What's new in Apache Atlas 2.0?](WhatsNew-2.0)
-* [Build & Install](InstallationSteps)
-* [Quick Start](QuickStart)
+* [What's new in Apache Atlas 2.0?](#/WhatsNew-2.0)
+* [Build & Install](#/Installation)
+* [Quick Start](#/QuickStart)
## API Documentation
* <a href="api/v2/index.html">REST API Documentation</a>
-* [Export & Import REST API Documentation](Import-Export-API)
+* [Export & Import REST API Documentation](#/ImportExportAPI)
* <a href="../api/rest.html">Legacy API Documentation</a>
## Developer Setup Documentation
-* [Developer Setup: Eclipse](EclipseSetup)
+* [Developer Setup: Eclipse](#/EclipseSetup)
@@ -17,7 +17,7 @@ The _AtlasServer_ entity type is special entity type in following ways:
* Gets created during Export or Import operation.
* It also has special property pages that display detailed audits for export and import operations.
-* Entities are linked to it using the new option within entity's attribute _[SoftReference](SoftReference)_.
+* Entities are linked to it using the new option within entity's attribute _[SoftReference](#/SoftReference)_.
The new type is available within the _Search By Type_ dropdown in both _Basic_ and _Advanced_ search.
@@ -76,7 +76,7 @@ The _AtlasServer_ will handle this and set its name as 'cl1' and _fullName_ as '
This property in _AtlasServer_ is a map with key and value both as String. This can be used to store any information pertaining to this instance.
-Please see [Incremental Export](IncrementalExport) for and example of how this property can be used.
+Please see [Incremental Export](#/IncrementalExport) for an example of how this property can be used.
#### REST APIs
**Title** |**Atlas Server API** |
...
@@ -4,6 +4,7 @@ route: /ReplicatedAttributes
menu: Documentation
submenu: Misc
---
# Replicated Attributes
#### Background
...
@@ -5,7 +5,8 @@ menu: Documentation
submenu: Misc
---
-import { dark } from 'react-syntax-highlighter/dist/esm/styles/prism';
+import themen from 'theme/styles/styled-colors';
+import * as theme from 'react-syntax-highlighter/dist/esm/styles/hljs';
import SyntaxHighlighter from 'react-syntax-highlighter';
# Entity Attribute Option: SoftReference
@@ -24,7 +25,7 @@ Attribute with _isSoftReference_ option set to _true_, is non-primitive attribut
Below is an example of using the new attribute option.
-<SyntaxHighlighter wrapLines={true} language="json" style={dark}>
+<SyntaxHighlighter wrapLines={true} language="json" style={theme.dark}>
{`"attributeDefs": [
{
"name": "replicatedFrom",
@@ -36,5 +37,5 @@ Below is an example of using the new attribute option.
"options": {
"isSoftReference": "true"
}
-}`}
+},...]`}
</SyntaxHighlighter>
---
name: Issue Tracking
-route: /Issue-Tracking
+route: /IssueTracking
menu: Project Info
submenu: Issue Tracking
---
...
---
name: Mailing Lists
-route: /Mailing-Lists
+route: /MailingLists
menu: Project Info
submenu: Mailing Lists
---
...
---
name: Project Information
-route: /Project-Info
+route: /ProjectInfo
menu: Project Info
submenu: Project Information
---
@@ -14,9 +14,9 @@ submenu: Project Information
# Overview
|Document|Description|
|:----|:----|
-|[About](/)|Apache Atlas Documentation|
-|[Project Team](/Team-List)|This document provides information on the members of this project. These are the individuals who have contributed to the project in one form or another.|
-|[Mailing Lists](/Mailing-Lists)|This document provides subscription and archive information for this project's mailing lists.|
-|[Issue Tracking](/Issue-Tracking)|This is a link to the issue management system for this project. Issues (bugs, features, change requests) can be created and queried using this link.|
-|[Project License](/Project-License)|This is a link to the definitions of project licenses.|
-|[Source Repository](/Source-Repository)|This is a link to the online source repository that can be viewed via a web browser.|
+|[About](#/)|Apache Atlas Documentation|
+|[Project Team](#/TeamList)|This document provides information on the members of this project. These are the individuals who have contributed to the project in one form or another.|
+|[Mailing Lists](#/MailingLists)|This document provides subscription and archive information for this project's mailing lists.|
+|[Issue Tracking](#/IssueTracking)|This is a link to the issue management system for this project. Issues (bugs, features, change requests) can be created and queried using this link.|
+|[Project License](#/ProjectLicense)|This is a link to the definitions of project licenses.|
+|[Source Repository](#/SourceRepository)|This is a link to the online source repository that can be viewed via a web browser.|
---
name: License
-route: /Project-License
+route: /ProjectLicense
menu: Project Info
submenu: License
---
...
---
name: Source Repository
-route: /Source-Repository
+route: /SourceRepository
menu: Project Info
submenu: Source Repository
---
...
---
name: Team List
-route: /Team-List
+route: /TeamList
menu: Project Info
submenu: Team List
---
...
---
name: REST API
-route: /rest-api
+route: /RestApi
menu: Documentation
submenu: REST API
---
...
---
name: Advance Search
-route: /Search-Advance
+route: /SearchAdvance
menu: Documentation
submenu: Search
---
...
---
name: Basic Search
-route: /Search-Basic
+route: /SearchBasic
menu: Documentation
submenu: Search
---
...
@@ -16,7 +16,7 @@ import Img from 'theme/components/shared/Img'
## Setting up Apache Atlas to use Apache Ranger Authorization
-As detailed in [Atlas Authorization Model](http://atlas.apache.org//Atlas-Authorization-Model.html), Apache Atlas supports pluggable authorization
+As detailed in [Atlas Authorization Model](#/AuthorizationModel), Apache Atlas supports a pluggable authorization
model. Apache Ranger provides an authorizer implementation that uses Apache Ranger policies for authorization. In
addition, the authorizer provided by Apache Ranger audits all authorizations into a central audit store.
...
@@ -13,7 +13,7 @@ import SyntaxHighlighter from 'react-syntax-highlighter';
## Setting up Atlas to use Simple Authorizer
-As detailed in Atlas Authorization Model](http://atlas.apache.org/Atlas-Authorization-Model.html), Apache Atlas supports a pluggable authorization
+As detailed in the Atlas [Authorization Model](#/AuthorizationModel), Apache Atlas supports a pluggable authorization
model. Simple authorizer is the default authorizer implementation included in Apache Atlas. Simple authorizer uses
policies defined in a JSON file. This document provides details of steps to configure Apache Atlas to use the simple
authorizer and details of the JSON file format containing authorization policies.
...
...@@ -97,7 +97,7 @@ The name of the class implementing the authorization interface can be registered ...@@ -97,7 +97,7 @@ The name of the class implementing the authorization interface can be registered
## Simple Authorizer ## Simple Authorizer
Simple authorizer is the default authorizer implementation included in Apache Atlas. For details of setting up Apache Atlas
to use the simple authorizer, please see [Setting up Atlas to use Simple Authorizer](#/AtlasSimpleAuthorizer).
## Ranger Authorizer
...@@ -109,7 +109,7 @@ in application.properties config file:
</SyntaxHighlighter>
Apache Ranger authorizer requires configuration files to be set up, for example to specify the Apache Ranger admin server URL,
the name of the service containing authorization policies, etc. For more details, please see [Setting up Atlas to use Ranger Authorizer](#/AtlasRangerAuthorizer).
## None authorizer
...
---
name: Security Details
route: /Security
menu: Documentation
submenu: Security
---
...@@ -54,7 +54,7 @@ The properties for configuring service authentication are:
* `atlas.authentication.keytab` - the path to the keytab file.
* `atlas.authentication.principal` - the principal to use for authenticating to the KDC. The principal is generally of the form "user/host@realm". You may use the '_HOST' token for the hostname, and the local hostname will be substituted in by the runtime (e.g. "Atlas/_HOST@EXAMPLE.COM").
> Note that when Atlas is configured with HBase as the storage backend in a secure cluster, the graph db (JanusGraph) needs sufficient user permissions to be able to create and access an HBase table. To grant the appropriate permissions, see [Graph persistence engine - HBase](#/Configuration).
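As a sketch, the service-authentication properties above could be set as follows in `atlas-application.properties`; the keytab path and realm are placeholders for your environment:

```shell
# Enable Kerberos authentication for the Atlas service
atlas.authentication.method=kerberos

# Keytab and principal used by the Atlas server; _HOST is replaced
# with the local hostname at runtime
atlas.authentication.keytab=/etc/security/keytabs/atlas.service.keytab
atlas.authentication.principal=atlas/_HOST@EXAMPLE.COM
```
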
### JAAS configuration
...@@ -233,7 +233,6 @@ Client {
* Copy `/etc/solr/conf/solr_jaas.conf` to all hosts running Solr.
* Edit `solr.in.sh` in `$SOLR_INSTALL_HOME/bin/`
...@@ -251,7 +250,6 @@ SOLR_AUTHENTICATION_OPTS=" -DauthenticationPlugin=org.apache.solr.security.Kerbe
</SyntaxHighlighter>
* Copy `solr.in.sh` to all hosts running Solr.
* Set up Solr to use the Kerberos plugin by uploading the `security.json`.
...@@ -275,4 +273,4 @@ SOLR_AUTHENTICATION_OPTS=" -DauthenticationPlugin=org.apache.solr.security.Kerbe
curl --negotiate -u : "http://<host>:8983/solr/"`}
</SyntaxHighlighter>
* Create collections in Solr corresponding to the indexes that Atlas uses, and change the Atlas configuration to point to the Solr instance set up as described in the [Install Steps](#/Installation).
---
name: Build Instruction
route: /BuildInstallation
menu: Documentation
submenu: Setup
---
...@@ -12,7 +12,7 @@ import SyntaxHighlighter from 'react-syntax-highlighter';
## Building & Installing Apache Atlas
### Building Apache Atlas
Download Apache Atlas 1.0.0 release sources, apache-atlas-1.0.0-sources.tar.gz, from the [downloads](#/Downloads) page.
Then follow the instructions below to build Apache Atlas.
...@@ -38,10 +38,10 @@ mvn clean -DskipTests package -Pdist
The above will build Apache Atlas for an environment with functional HBase and Solr instances. Apache Atlas needs to be set up with the following to run in this environment:
* Configure `atlas.graph.storage.hostname` (see "Graph persistence engine - HBase" in the [Configuration](#/Configuration) section).
* Configure `atlas.graph.index.search.solr.zookeeper-url` (see "Graph Search Index - Solr" in the [Configuration](#/Configuration) section).
* Set `HBASE_CONF_DIR` to point to a valid Apache HBase config directory (see "Graph persistence engine - HBase" in the [Configuration](#/Configuration) section).
* Create indices in Apache Solr (see "Graph Search Index - Solr" in the [Configuration](#/Configuration) section).
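Putting the steps above together, an `atlas-application.properties` fragment for an external HBase and Solr setup might look like the following; hostnames and ports are placeholders, and the exact property set should be checked against the Configuration section:

```shell
# Graph persistence engine - HBase
atlas.graph.storage.backend=hbase
atlas.graph.storage.hostname=hbase-host1,hbase-host2,hbase-host3

# Graph search index - Solr in cloud mode, pointed at the Solr ZooKeeper ensemble
atlas.graph.index.search.backend=solr
atlas.graph.index.search.solr.mode=cloud
atlas.graph.index.search.solr.zookeeper-url=zk-host1:2181,zk-host2:2181
```
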
### Packaging Apache Atlas with embedded Apache HBase & Apache Solr
...
...@@ -16,7 +16,7 @@ All configuration in Atlas uses java properties style configuration. The main co
## Graph Configs
### Graph Persistence engine - HBase
Set the following properties to configure [JanusGraph](https://janusgraph.org/) to use HBase as the persistence engine. Please refer to the [JanusGraph documentation](http://docs.janusgraph.org/0.2.0/configuration.html#_hbase_caching) for more details.
<SyntaxHighlighter wrapLines={true} language="shell" style={theme.dark}>
{`atlas.graph.storage.backend=hbase
...
...@@ -127,7 +127,7 @@ and change it to look as below
#### Configuring Apache HBase as the storage backend for the Graph Repository
By default, Apache Atlas uses JanusGraph as the graph repository; it is currently the only graph repository implementation available. The Apache HBase versions currently supported are 1.1.x. For configuring Apache Atlas graph persistence on Apache HBase, please see "Graph persistence engine - HBase" in the [Configuration](#/Configuration) section for more details.
Apache HBase tables used by Apache Atlas can be set using the following configurations:
...@@ -190,7 +190,7 @@ Pre-requisites for running Apache Solr in cloud mode
*Configuring Elasticsearch as the indexing backend for the Graph Repository (Tech Preview)*
By default, Apache Atlas uses [JanusGraph](https://janusgraph.org/) as the graph repository; it is currently the only graph repository implementation available. For configuring [JanusGraph](https://janusgraph.org/) to work with Elasticsearch, please follow the instructions below.
* Install an Elasticsearch cluster. The version currently supported is 5.6.4, and it can be acquired from: https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.6.4.tar.gz
...@@ -207,18 +207,18 @@ For more information on JanusGraph configuration for elasticsearch, please refer
#### Configuring Kafka Topics
Apache Atlas uses Apache Kafka to ingest metadata from other components at runtime. This is described in the [Architecture](#/Architecture) page
in more detail. Depending on the configuration of Apache Kafka, sometimes you might need to set up the topics explicitly before
using Apache Atlas. To do so, Apache Atlas provides a script `bin/atlas_kafka_setup.py` which can be run from the Apache Atlas server. In some
environments, the hooks might start getting used before the Apache Atlas server itself is set up. In such cases, the topics
can be created on the hosts where hooks are installed using a similar script `hook-bin/atlas_kafka_setup_hook.py`. Both of these
use the configuration in `atlas-application.properties` for setting up the topics. Please refer to the [Configuration](#/Configuration) page
for these details.
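For example, the topic-setup scripts described above could be invoked as sketched below; the paths assume a standard Atlas installation layout, and the topic names in the comment are the Atlas defaults:

```shell
# On the Apache Atlas server host: create the Kafka topics used by Atlas
# (e.g. ATLAS_HOOK, ATLAS_ENTITIES), using the broker/ZooKeeper details
# configured in atlas-application.properties
bin/atlas_kafka_setup.py

# On hosts where only the hooks are installed:
hook-bin/atlas_kafka_setup_hook.py
```
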
#### Setting up Apache Atlas
There are a few steps that set up dependencies of Apache Atlas. One such example is setting up the JanusGraph schema in the storage backend of choice. In a simple single-server setup, these are automatically set up with default configuration when the server first accesses these dependencies.
However, there are scenarios when we may want to run setup steps explicitly as one-time operations. For example, in a multiple-server scenario using [High Availability](#/HighAvailability), it is preferable to run setup steps from one of the server instances the first time, and then start the services.
To run these steps one time, execute the command `bin/atlas_start.py -setup` from a single Apache Atlas server instance.
...
...@@ -29,7 +29,7 @@ submenu: Whats New
### DSL search
With the DSL rewrite and simplification, some older constructs may not work. Here's a list of behavior changes from previous
releases. More DSL-related changes can be found [here](#/SearchAdvance).
* When filtering or narrowing results using a string attribute, the value **MUST** be enclosed in double quotes
* Table name="Table1"
...
<!--
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
-->
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 976.59 177.38"><defs><style>.cls-1{fill:#cdcccc;}.cls-2,.cls-3{fill:#1fcfb8;}.cls-2{stroke:#1fcfb8;stroke-width:1.18px;}.cls-2,.cls-4,.cls-5{stroke-miterlimit:10;}.cls-4,.cls-5{fill:none;stroke:#cdcccc;}.cls-4{stroke-width:3.55px;}.cls-5{stroke-width:5px;}.cls-6{fill:#009883;}</style></defs><title>Asset 6</title><g id="Layer_2" data-name="Layer 2"><g id="Layer_1-2" data-name="Layer 1"><path class="cls-1" d="M260.81,55.74l23.37,65.13h9.5v10.65H259.34V119.68h10.91l-6.48-18.94h-30.3L227,119.68h9.86v11.84H202.5V120.87h10.42l23.26-65.13h-10V45.08h45V55.74Zm-12,0h-.4L236.17,91.26h24.6Z"/><path class="cls-1" d="M364.14,101.68a34.15,34.15,0,0,1-3.79,15.82A29.78,29.78,0,0,1,350,129.12a26,26,0,0,1-14.5,4.26q-11.75,0-20.53-9.32v24h11.84v9.48H292.5V148.1h10.65V81.79h-9.47V71.13H315V80.8a29.25,29.25,0,0,1,9.57-7.89,24.9,24.9,0,0,1,11.18-2.58,26.37,26.37,0,0,1,14.75,4.22A28.79,28.79,0,0,1,360.55,86,34.74,34.74,0,0,1,364.14,101.68Zm-27.89,20.38a14.71,14.71,0,0,0,8.85-2.85,19,19,0,0,0,6.17-7.77,24.78,24.78,0,0,0,2.23-10.39A21.46,21.46,0,0,0,351,90.86a19.41,19.41,0,0,0-6.74-7.43,17.42,17.42,0,0,0-9.65-2.77,17.84,17.84,0,0,0-10,2.94,20,20,0,0,0-6.9,7.8,23.28,23.28,0,0,0-2.46,10.68q0,9.36,5.91,14.67T336.25,122.06Z"/><path class="cls-1" d="M425.12,92.5v28.37h9.48v10.65H414.47v-7.36q-8.46,8.39-18.84,8.39a19.87,19.87,0,0,1-10-2.57,20.29,20.29,0,0,1-7.28-6.88,18.22,18.22,0,0,1,.29-19.64,19.75,19.75,0,0,1,8.14-6.83,24.36,24.36,0,0,1,10.33-2.39,28,28,0,0,1,16.18,5.14V92.73q0-7.17-3-10.27t-10-3.09a17.17,17.17,0,0,0-8.19,1.78,14.27,14.27,0,0,0-5.46,5.4l-11.55-2.86a25.87,25.87,0,0,1,10.56-10.88,32.9,32.9,0,0,1,15.81-3.59q12.31,0,18,5.6T425.12,92.5ZM397,123q8.67,0,16.27-7.25v-7q-7.71-6-15.41-6a11.92,11.92,0,0,0-8.08,2.94,9.87,9.87,0,0,0-.43,14.51A10.66,10.66,0,0,0,397,123Z"/><path class="cls-1" 
d="M487.89,75.43v-4.3h10.65v22.5H487.89a13.49,13.49,0,0,0-5.33-9.92,17.57,17.57,0,0,0-10.72-3.42A16.68,16.68,0,0,0,462.49,83a17.94,17.94,0,0,0-6.4,7.35,22.75,22.75,0,0,0-2.26,10.12q0,9.33,5.09,15.23a16.89,16.89,0,0,0,13.44,5.91q12.92,0,18.25-11.7l10,4.64q-8.4,17.74-28.73,17.75a31.11,31.11,0,0,1-16.16-4.19,28.58,28.58,0,0,1-10.92-11.47,34,34,0,0,1-3.85-16.17,32.33,32.33,0,0,1,3.91-15.83,28.48,28.48,0,0,1,10.74-11.11,29.78,29.78,0,0,1,15.24-4A26,26,0,0,1,487.89,75.43Z"/><path class="cls-1" d="M550.4,69.52q16.83,0,16.83,18.1v33.25h10.66v10.65h-22.5V92.24c0-4.3-.63-7.33-1.87-9.08a6.64,6.64,0,0,0-5.79-2.63q-6.62,0-17.21,7.71v32.63h10.66v10.65H508V120.87h10.66V55.74H508V45.08h22.5V78.14Q541.2,69.52,550.4,69.52Z"/><path class="cls-1" d="M647.69,104.29H597.38q1.1,8.63,6.68,13.62a19.86,19.86,0,0,0,13.71,5,25,25,0,0,0,11-2.45,21.5,21.5,0,0,0,8.51-7.54l10.39,4.68a31.54,31.54,0,0,1-12.88,11.57,38.86,38.86,0,0,1-17.33,3.91A35.72,35.72,0,0,1,600.24,129a29.3,29.3,0,0,1-11.74-11.3,34,34,0,0,1,0-32.61,29.78,29.78,0,0,1,11.33-11.33,33.82,33.82,0,0,1,32.41.11,29.36,29.36,0,0,1,11.1,12A42.4,42.4,0,0,1,647.69,104.29ZM615.88,79.67a17,17,0,0,0-11.7,4.48A20.6,20.6,0,0,0,597.73,96H635.3a20.89,20.89,0,0,0-7-11.85A18.77,18.77,0,0,0,615.88,79.67Z"/><path class="cls-2" d="M738,55.74l23.37,65.13h9.5v10.65H736.57V119.68h10.91L741,100.74H710.69l-6.48,18.94h9.86v11.84H679.73V120.87h10.42l23.26-65.13h-10V45.08h45V55.74Zm-12.34,0h-.4L713.05,91.26h24.61Z"/><path class="cls-2" d="M806.43,71.13V81.79H795.78v31.74q0,4.95,1,6.42a4,4,0,0,0,3.53,1.47,18,18,0,0,0,6.1-1.1v10.11a33,33,0,0,1-8.74,1.58q-7.32,0-10.54-3.69t-3.21-12.66V81.79h-8.29V71.13h7.1V62l13-10.79V71.13Z"/><path class="cls-2" d="M847.88,120.87v10.65h-32V120.87h10.66V55.74H815.91V45.08h22.5v75.79Z"/><path class="cls-2" 
d="M908.28,92.5v28.37h9.47v10.65H897.62v-7.36q-8.46,8.39-18.83,8.39a19.89,19.89,0,0,1-10-2.57,20.26,20.26,0,0,1-7.27-6.88,18.25,18.25,0,0,1,.28-19.64A19.78,19.78,0,0,1,870,96.63a24.24,24.24,0,0,1,10.31-2.39,28,28,0,0,1,16.17,5.14V92.73q0-7.17-3-10.27t-10-3.09a17.18,17.18,0,0,0-8.2,1.78,14.18,14.18,0,0,0-5.45,5.4l-11.56-2.86a25.87,25.87,0,0,1,10.56-10.88,32.9,32.9,0,0,1,15.81-3.59q12.31,0,18,5.6T908.28,92.5ZM880.16,123q8.69,0,16.27-7.25v-7q-7.7-6-15.41-6a11.92,11.92,0,0,0-8.08,2.94,9.87,9.87,0,0,0-.43,14.51A10.66,10.66,0,0,0,880.16,123Z"/><path class="cls-2" d="M962.75,73.32V71.13h10.66V91.26H962.75V87.61a16,16,0,0,0-13.87-7.37,12.87,12.87,0,0,0-7.56,2.06,6.22,6.22,0,0,0-2.93,5.31,5.4,5.4,0,0,0,2.22,4.57q2.24,1.65,10,2.91l7.84,1.54Q976,99.89,976,114.05a16.11,16.11,0,0,1-6.35,13.3q-6.33,5-16.83,5a28,28,0,0,1-8.2-1.23,25,25,0,0,1-6.74-3.06v2.25H927.22V110.21h10.66v2.05a10.58,10.58,0,0,0,4.76,7,16,16,0,0,0,9.11,2.63q5.7,0,9-2a6.63,6.63,0,0,0,3.27-6,4.71,4.71,0,0,0-2.6-4.37q-2.58-1.45-10.43-3l-6-1.31q-9-1.83-13.14-6.28a16,16,0,0,1-4.17-11.37,15.12,15.12,0,0,1,3.07-9.56,19,19,0,0,1,7.87-6,25.41,25.41,0,0,1,9.86-2Q956.94,70,962.75,73.32Z"/><path class="cls-3" d="M76.38,0,0,48.2v80.66l76.38,48.2,76.38-48.2V48.2Zm56.25,118.23-56.25,35.5-56.25-35.5V58.83l56.25-35.5,56.25,35.5Z"/><circle class="cls-4" cx="62.76" cy="122.05" r="4.74"/><circle class="cls-4" cx="107.76" cy="80.6" r="5.33"/><circle class="cls-5" cx="59.21" cy="81.79" r="8.5"/><line class="cls-4" x1="59" y1="90.19" x2="61.96" y2="117.39"/><line class="cls-4" x1="102.43" y1="81.79" x2="68" y2="82.19"/><g id="search"><path class="cls-3" d="M84.24,98.37H81l-1.22-1.22a25.43,25.43,0,0,0,6.5-17.05,26.39,26.39,0,1,0-26.39,26.39A25.43,25.43,0,0,0,76.93,100l1.22,1.22v3.25l20.3,20.3,6.09-6.09Zm-24.36,0A18.27,18.27,0,1,1,78.15,80.1,18.19,18.19,0,0,1,59.88,98.37Z"/></g><polygon class="cls-6" points="152.76 48.1 132.63 58.73 132.63 118.24 75.79 154.11 75.79 177.38 152.76 128.81 152.76 48.1"/></g></g></svg>
\ No newline at end of file
...@@ -68,7 +68,8 @@ Wrapper.defaultProps = WrapperProps;
const LogoImg = styled("img")`
padding: 0;
height: 50px;
margin: 2px 0;
`;
const LogoText = styled("h1")`
...@@ -89,7 +90,7 @@ export const Logo = ({ showBg }) => {
return (
<Wrapper showBg={showBg}>
<Link to={typeof base === "string" ? base : "/"}>
<LogoImg src={`${baseUrl}/images/atlas_logo.svg`} alt={title} />
</Link>
</Wrapper>
);
...