Commit 856a0c61 by Richard Ding Committed by David Radley

ATLAS-2003 Add Javadoc format to class summaries

parent b714fe00
......@@ -44,13 +44,14 @@ import java.util.TreeSet;
/**
* InMemoryJAASConfiguration
*
* <p>
* A utility class with a static init method that loads all JAAS configuration from the application
* properties file (e.g. atlas.properties) and sets it as the default lookup configuration for
* all JAAS configuration lookups.
*
* <p>
* Example settings in jaas-application.properties:
*
* <pre class=code>
* atlas.jaas.KafkaClient.loginModuleName = com.sun.security.auth.module.Krb5LoginModule
* atlas.jaas.KafkaClient.loginModuleControlFlag = required
* atlas.jaas.KafkaClient.option.useKeyTab = true
......@@ -72,9 +73,12 @@ import java.util.TreeSet;
* atlas.jaas.MyClient.1.option.storeKey = true
* atlas.jaas.MyClient.1.option.serviceName = kafka
* atlas.jaas.MyClient.1.option.keyTab = /etc/security/keytabs/kafka_client.keytab
* atlas.jaas.MyClient.1.option.principal = kafka-client-1@EXAMPLE.COM
* atlas.jaas.MyClient.1.option.principal = kafka-client-1@EXAMPLE.COM </pre>
*
* <p>
* This will set the JAAS configuration - equivalent to the jaas.conf file entries:
*
* <pre class=code>
* KafkaClient {
* com.sun.security.auth.module.Krb5LoginModule required
* useKeyTab=true
......@@ -97,23 +101,26 @@ import java.util.TreeSet;
* serviceName=kafka
* keyTab="/etc/security/keytabs/kafka_client.keytab"
* principal="kafka-client-1@EXAMPLE.COM";
* };
*
* Here is the syntax for atlas.properties to add JAAS configuration:
*
* The property name has to begin with 'atlas.jaas.' + clientId (in case of Kafka client,
* it expects the clientId to be KafkaClient).
* The following property must be there to specify the JAAS loginModule name
* 'atlas.jaas.' + clientId + '.loginModuleName'
* The following optional property should be set to specify the loginModuleControlFlag
* 'atlas.jaas.' + clientId + '.loginModuleControlFlag'
* Default value : required , Possible values: required, optional, sufficient, requisite
* Then you can add additional optional parameters as options for the configuration using the following
* }; </pre>
* <p>
* Here is the syntax for atlas.properties to add JAAS configuration:
* <p>
* The property name has to begin with 'atlas.jaas.' + clientId (in the case of the Kafka client,
* the clientId is expected to be KafkaClient).
* <p>
* The following property must be present to specify the JAAS loginModule name
* <pre> 'atlas.jaas.' + clientId + '.loginModuleName' </pre>
* <p>
* The following optional property should be set to specify the loginModuleControlFlag
* <pre> 'atlas.jaas.' + clientId + '.loginModuleControlFlag'
* Default value: required. Possible values: required, optional, sufficient, requisite </pre>
* <p>
* Then you can add additional optional parameters as options for the configuration using the following
* syntax:
* 'atlas.jaas.' + clientId + '.option.' + <optionName> = <optionValue>
*
* The current setup will lookup JAAS configration from the atlas-application.properties first, if not available,
* it will delegate to the original configuration
* <pre> 'atlas.jaas.' + clientId + '.option.' + &lt;optionName&gt; = &lt;optionValue&gt; </pre>
* <p>
* The current setup will look up the JAAS configuration from atlas-application.properties first;
* if not available, it will delegate to the original configuration
*
*/
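The property-to-JAAS mapping described above can be sketched roughly as follows. This is an illustrative standalone sketch, not the actual InMemoryJAASConfiguration code; the class and method names here are invented for the example.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import javax.security.auth.login.AppConfigurationEntry;
import javax.security.auth.login.AppConfigurationEntry.LoginModuleControlFlag;

// Illustrative sketch: build a JAAS AppConfigurationEntry from
// 'atlas.jaas.<clientId>.*' style properties (not the real Atlas implementation).
public class JaasFromProperties {

    static AppConfigurationEntry toEntry(Properties props, String clientId) {
        String prefix = "atlas.jaas." + clientId + ".";

        String loginModule = props.getProperty(prefix + "loginModuleName");
        // Default control flag is 'required' when not specified
        String flagName = props.getProperty(prefix + "loginModuleControlFlag", "required");
        LoginModuleControlFlag flag;
        switch (flagName.trim().toLowerCase()) {
            case "optional":   flag = LoginModuleControlFlag.OPTIONAL;   break;
            case "sufficient": flag = LoginModuleControlFlag.SUFFICIENT; break;
            case "requisite":  flag = LoginModuleControlFlag.REQUISITE;  break;
            default:           flag = LoginModuleControlFlag.REQUIRED;   break;
        }

        // Collect 'atlas.jaas.<clientId>.option.<name>' entries as module options
        Map<String, String> options = new HashMap<>();
        String optionPrefix = prefix + "option.";
        for (String key : props.stringPropertyNames()) {
            if (key.startsWith(optionPrefix)) {
                options.put(key.substring(optionPrefix.length()), props.getProperty(key));
            }
        }
        return new AppConfigurationEntry(loginModule, flag, options);
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("atlas.jaas.KafkaClient.loginModuleName",
                          "com.sun.security.auth.module.Krb5LoginModule");
        props.setProperty("atlas.jaas.KafkaClient.option.useKeyTab", "true");

        AppConfigurationEntry entry = toEntry(props, "KafkaClient");
        System.out.println(entry.getLoginModuleName());
        System.out.println(entry.getOptions().get("useKeyTab"));
    }
}
```

The key point mirrored here is that the option suffix after `.option.` becomes the JAAS module option name verbatim, and an absent control flag falls back to `required`.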
......
......@@ -40,18 +40,18 @@ import java.util.Set;
/**
* Abstract implementation of AtlasGraphQuery that is used by both Titan 0.5.4
* and Titan 1.0.0.
*
* <p>
* Represents a graph query as an OrConditions which consists of
* 1 or more AndConditions. The query is executed by converting
* the AndConditions to native GraphQuery instances that can be executed
* directly against Titan. The overall result is obtained by unioning together
* the results from those individual GraphQueries.
*
* <p>
* Here is a pictorial view of what is going on here. Conceptually,
* the query being executed can be thought of as the where clause
* in a query
*
*
* <pre>
* where (a =1 and b=2) or (a=2 and b=3)
*
* ||
......@@ -85,7 +85,7 @@ import java.util.Set;
* \/
*
* result
*
* </pre>
*
*
*/
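The OR-of-ANDs execution described above can be sketched as follows. This is a simplified stand-in using plain predicates, not the actual AtlasGraphQuery/Titan machinery: each AND condition is evaluated independently and the per-condition results are unioned.

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;
import java.util.function.Predicate;

// Simplified sketch of executing an OR of AND-conditions: each AND clause
// is evaluated on its own (mirroring one native GraphQuery per AndCondition)
// and the results are unioned.
public class OrOfAndsSketch {

    // One AND-condition: a vertex matches if it satisfies every predicate.
    static <V> Predicate<V> andCondition(List<Predicate<V>> clauses) {
        return v -> clauses.stream().allMatch(p -> p.test(v));
    }

    // Execute each AND-condition separately, then union the results.
    static <V> Set<V> execute(List<V> vertices, List<Predicate<V>> andConditions) {
        Set<V> result = new LinkedHashSet<>();
        for (Predicate<V> cond : andConditions) {
            for (V v : vertices) {
                if (cond.test(v)) {
                    result.add(v);   // union: duplicates collapse in the set
                }
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // where (a=1 and b=2) or (a=2 and b=3), with vertices as (a, b) pairs
        List<int[]> vertices = Arrays.asList(
                new int[]{1, 2}, new int[]{2, 3}, new int[]{1, 3});

        Predicate<int[]> c1 = andCondition(Arrays.asList(v -> v[0] == 1, v -> v[1] == 2));
        Predicate<int[]> c2 = andCondition(Arrays.asList(v -> v[0] == 2, v -> v[1] == 3));

        Set<int[]> matches = execute(vertices, Arrays.asList(c1, c2));
        System.out.println(matches.size()); // (1,2) and (2,3) match -> 2
    }
}
```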
......
......@@ -35,29 +35,34 @@ import static org.codehaus.jackson.annotate.JsonAutoDetect.Visibility.PUBLIC_ONL
/**
* AtlasRelationshipDef is a TypeDef that defines a relationship.
*
* <p>
* As with other typeDefs, the AtlasRelationshipDef has a name. Once created, the RelationshipDef has a guid.
* The name and the guid are the two ways that the RelationshipDef is identified.
*
* <p>
* RelationshipDefs have 2 ends, each of which specifies a cardinality, an EntityDef type name, a name, and
* optionally whether the end is a container.
* RelationshipDefs can have AttributeDefs - though only primitive types are allowed.
* RelationshipDefs have a relationshipCategory specifying the UML type of relationship required
* <p>
* RelationshipDefs can have AttributeDefs - though only primitive types are allowed. <br>
* RelationshipDefs have a relationshipCategory specifying the UML type of relationship required. <br>
* RelationshipDefs also have a PropagateTags setting - indicating which way tags could flow over the relationships.
*
* <p>
* The way EntityDefs and RelationshipDefs are intended to be used is that EntityDefs will define AttributeDefs;
* these AttributeDefs will not specify an EntityDef type name as their types.
*
* <p>
* RelationshipDefs introduce new attributes to the entity instances. For example
* EntityDef A might have attributes attr1,attr2,attr3
* EntityDef B might have attributes attr4,attr5,attr6
* RelationshipDef AtoB might define 2 ends
* end1: type A, name attr7
* end1: type B, name attr8
* <p>
* EntityDef A might have attributes attr1,attr2,attr3 <br>
* EntityDef B might have attributes attr4,attr5,attr6 <br>
* RelationshipDef AtoB might define 2 ends <br>
*
* When an instance of EntityDef A is created, it will have attributes attr1,attr2,attr3,attr7
* When an instance of EntityDef B is created, it will have attributes attr4,attr5,attr6,attr8
* <pre>
* end1: type A, name attr7
* end2: type B, name attr8 </pre>
*
* <p>
* When an instance of EntityDef A is created, it will have attributes attr1,attr2,attr3,attr7 <br>
* When an instance of EntityDef B is created, it will have attributes attr4,attr5,attr6,attr8
* <p>
* In this way relationshipDefs can be authored separately from entityDefs and can inject relationship attributes into
* the entity instances.
*
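The attribute-injection behaviour described above can be sketched with a toy model. This is not the real AtlasRelationshipDef API; the record types and method below are invented for illustration (Java 16+ for records): an entity's effective attributes are its own attributes plus the end names injected by RelationshipDefs that reference its type.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustrative model (not the real AtlasRelationshipDef API): relationship
// ends inject extra attribute names into instances of the referenced types.
public class RelationshipAttrSketch {

    record End(String typeName, String attrName) {}
    record RelationshipDef(End end1, End end2) {}

    static List<String> effectiveAttributes(String typeName,
                                            List<String> ownAttrs,
                                            List<RelationshipDef> relDefs) {
        List<String> attrs = new ArrayList<>(ownAttrs);
        for (RelationshipDef rel : relDefs) {
            // An end whose type matches this entity contributes its attribute name
            if (rel.end1().typeName().equals(typeName)) attrs.add(rel.end1().attrName());
            if (rel.end2().typeName().equals(typeName)) attrs.add(rel.end2().attrName());
        }
        return attrs;
    }

    public static void main(String[] args) {
        // RelationshipDef AtoB: end1 type A name attr7, end2 type B name attr8
        RelationshipDef aToB =
                new RelationshipDef(new End("A", "attr7"), new End("B", "attr8"));

        System.out.println(effectiveAttributes("A",
                Arrays.asList("attr1", "attr2", "attr3"), Arrays.asList(aToB)));
        // [attr1, attr2, attr3, attr7]
    }
}
```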
......@@ -74,8 +79,10 @@ public class AtlasRelationshipDef extends AtlasStructDef implements java.io.Seri
/**
* The Relationship category determines the style of relationship around containment and lifecycle.
* UML terminology is used for the values.
* ASSOCIATION is a relationship with no containment.
* <p>
* ASSOCIATION is a relationship with no containment. <br>
* COMPOSITION and AGGREGATION are containment relationships.
* <p>
* The difference being in the lifecycles of the container and its children. In the COMPOSITION case,
* the children cannot exist without the container. For AGGREGATION, the life cycles
* of the container and children are totally independent.
......@@ -86,17 +93,19 @@ public class AtlasRelationshipDef extends AtlasStructDef implements java.io.Seri
/**
* PropagateTags indicates whether tags should propagate across the relationship instance.
* <p>
* Tags can propagate:
* NONE - not at all
* ONE_TO_TWO - from end 1 to 2
* TWO_TO_ONE - from end 2 to 1
* <p>
* NONE - not at all <br>
* ONE_TO_TWO - from end 1 to 2 <br>
* TWO_TO_ONE - from end 2 to 1 <br>
* BOTH - both ways
*
* <p>
* Care needs to be taken when specifying this value. The use cases we are aware of where this flag is useful:
*
* - propagating confidentiality classifications from a table to columns - ONE_TO_TWO could be used here
* <p>
* - propagating confidentiality classifications from a table to columns - ONE_TO_TWO could be used here <br>
* - propagating classifications around Glossary synonyms - BOTH could be used here.
*
* <p>
* There is an expectation that further enhancements will allow more granular control of tag propagation and will
* address how to resolve conflicts.
*/
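The four propagation directions listed above can be captured in a small sketch. The enum values mirror the documented names, but the helper method and class are invented for illustration; the real enum lives in AtlasRelationshipDef.

```java
// Illustrative sketch of the tag-propagation directions described above
// (not the actual Atlas enum or its API).
public class PropagateTagsSketch {

    enum PropagateTags { NONE, ONE_TO_TWO, TWO_TO_ONE, BOTH }

    // Does a tag set on the given end (1 or 2) propagate to the other end?
    static boolean propagatesFrom(PropagateTags mode, int end) {
        switch (mode) {
            case BOTH:       return true;
            case ONE_TO_TWO: return end == 1;
            case TWO_TO_ONE: return end == 2;
            default:         return false;   // NONE
        }
    }

    public static void main(String[] args) {
        // Table -> columns: confidentiality flows from end 1 only
        System.out.println(propagatesFrom(PropagateTags.ONE_TO_TWO, 1)); // true
        System.out.println(propagatesFrom(PropagateTags.ONE_TO_TWO, 2)); // false
    }
}
```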
......
......@@ -27,11 +27,13 @@ import java.lang.reflect.Type;
import java.util.List;
/**
* Interface to the Atlas notification framework. Use this interface to create consumers and to send messages of a
* given notification type.
*
* 1. Atlas sends entity notifications
* 2. Hooks send notifications to create/update types/entities. Atlas reads these messages
* Interface to the Atlas notification framework.
* <p>
* Use this interface to create consumers and to send messages of a given notification type.
* <ol>
* <li>Atlas sends entity notifications
* <li>Hooks send notifications to create/update types/entities. Atlas reads these messages
* </ol>
*/
public interface NotificationInterface {
......
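The two message flows listed in the Javadoc can be sketched with a minimal in-memory stand-in. The class, enum, and method names below are invented for illustration and differ from the real NotificationInterface.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// In-memory sketch of the two flows: Atlas sends entity notifications to
// consumers, and hooks send create/update messages that Atlas consumes.
// (Not the real NotificationInterface API.)
public class NotificationSketch {

    enum NotificationType { ENTITIES, HOOK }

    private final Map<NotificationType, List<String>> queues = new HashMap<>();

    void send(NotificationType type, String message) {
        queues.computeIfAbsent(type, t -> new ArrayList<>()).add(message);
    }

    // Consumers drain all pending messages of the given type.
    List<String> consume(NotificationType type) {
        List<String> msgs = queues.remove(type);
        return msgs == null ? new ArrayList<>() : msgs;
    }

    public static void main(String[] args) {
        NotificationSketch n = new NotificationSketch();
        n.send(NotificationType.HOOK, "create-entity");       // hook -> Atlas
        System.out.println(n.consume(NotificationType.HOOK)); // [create-entity]
    }
}
```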
......@@ -29,33 +29,30 @@ import org.slf4j.LoggerFactory;
/**
* Optimizer that pulls has expressions out of an 'and' expression.
*
* <p>
* For example:
*
* g.V().and(has('x'),has('y')
*
* <pre class=code>
* g.V().and(has('x'),has('y') </pre>
* <p>
* is optimized to:
*
* g.V().has('x').has('y')
*
* <pre class=code>
* g.V().has('x').has('y') </pre>
* <p>
* There are certain cases where it is not safe to move an expression out
* of the 'and'. For example, in the expression
*
* g.V().and(has('x').out('y'),has('z'))
*
* <pre class=code>
* g.V().and(has('x').out('y'),has('z')) </pre>
* <p>
* has('x').out('y') cannot be moved out of the 'and', since it changes the value of the traverser.
*
* <p>
* At this time, the ExpandAndsOptimizer is not able to handle this scenario, so we don't extract
* that expression. In this case, the result is:
*
* g.V().has('z').and(has('x').out('y'))
*
* <pre class=code>
* g.V().has('z').and(has('x').out('y')) </pre>
* <p>
* The optimizer will call ExpandAndsOptimization recursively on the children, so
* there is no need to recursively update the children here.
*
* @param expr
* @param context
* @return the expressions that should be unioned together to get the query result
*/
public class ExpandAndsOptimization implements GremlinOptimization {
......
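The hoisting rule described above can be sketched as follows. This is a toy string-level model, not the real ExpandAndsOptimization (which operates on Gremlin expression trees): children of an `and()` that are plain filters are hoisted out, while children that change the traverser stay inside (Java 16+ for records).

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Simplified sketch of the expand-ands idea: plain has() filters can be
// hoisted out of and(); children that move the traverser (e.g. contain an
// out() step) must stay inside the and().
public class ExpandAndsSketch {

    record Child(String expr, boolean changesTraverser) {}

    static String optimize(List<Child> andChildren) {
        StringBuilder hoisted = new StringBuilder("g.V()");
        List<String> kept = new ArrayList<>();
        for (Child c : andChildren) {
            if (c.changesTraverser()) {
                kept.add(c.expr());                    // unsafe to move: keep inside and()
            } else {
                hoisted.append(".").append(c.expr()); // safe: hoist out of and()
            }
        }
        if (!kept.isEmpty()) {
            hoisted.append(".and(").append(String.join(",", kept)).append(")");
        }
        return hoisted.toString();
    }

    public static void main(String[] args) {
        // g.V().and(has('x').out('y'), has('z')) -> g.V().has('z').and(has('x').out('y'))
        System.out.println(optimize(Arrays.asList(
                new Child("has('x').out('y')", true),
                new Child("has('z')", false))));
    }
}
```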
......@@ -61,15 +61,17 @@ import java.util.Map;
/**
* HBase-based repository for entity audit events
* Table -> 1, ATLAS_ENTITY_EVENTS
* Key -> entity id + timestamp
* Column Family -> 1,dt
* Columns -> action, user, detail
* versions -> 1
*
* Note: The timestamp in the key is assumed to be timestamp in milli seconds. Since the key is entity id + timestamp,
* and only 1 version is kept, there can be just 1 audit event per entity id + timestamp. This is ok for one atlas server.
* But if there are more than one atlas servers, we should use server id in the key
* <p>
* Table -> 1, ATLAS_ENTITY_EVENTS <br>
* Key -> entity id + timestamp <br>
* Column Family -> 1,dt <br>
* Columns -> action, user, detail <br>
* versions -> 1 <br>
* <p>
* Note: The timestamp in the key is assumed to be a timestamp in milliseconds. Since the key is
* entity id + timestamp, and only 1 version is kept, there can be just 1 audit event per entity
* id + timestamp. This is OK for one Atlas server. But if there is more than one Atlas server,
* we should use the server id in the key
*/
@Singleton
@Component
......
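The row-key scheme described in the Javadoc can be sketched as follows. The class and method are illustrative, not the actual repository code; the point is that the same entity id and the same millisecond produce identical keys, which is the collision the note warns about.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Illustrative sketch of the audit row-key scheme: key = entity id bytes
// followed by the event timestamp in milliseconds. With only one version
// kept, two events for the same entity in the same millisecond collide.
public class AuditKeySketch {

    static byte[] rowKey(String entityId, long timestampMillis) {
        byte[] id = entityId.getBytes(StandardCharsets.UTF_8);
        return ByteBuffer.allocate(id.length + Long.BYTES)
                         .put(id)
                         .putLong(timestampMillis)
                         .array();
    }

    public static void main(String[] args) {
        byte[] k1 = rowKey("entity-1", 1000L);
        byte[] k2 = rowKey("entity-1", 1000L);
        // Same entity + same millisecond -> identical keys (the noted collision)
        System.out.println(java.util.Arrays.equals(k1, k2)); // true
    }
}
```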