S2Graph
S2Graph is a graph database designed to handle transactional graph processing at scale. Its REST API allows you to store, manage, and query relational information using edge and vertex representations in a fully asynchronous and non-blocking manner. S2Graph is an implementation of Apache TinkerPop on Apache HBase.
This document covers some basic concepts and terms of S2Graph, and helps you get a feel for the S2Graph API.
Building from the source
To build S2Graph from the source, install JDK 8 and SBT, and run the following command in the project root:
sbt package
This will create a distribution of S2Graph that is ready to be deployed. The distribution can be found at target/apache-s2graph-$version-incubating-bin.
Quick Start
Once you have extracted the downloaded binary release of S2Graph, or built it from the source as described above, you should find the following files and directories:
DISCLAIMER
LICENCE # the Apache License 2.0
NOTICE
bin # scripts to manage the lifecycle of S2Graph
conf # configuration files
lib # contains the binary
logs # application logs
var # application data
This directory layout contains all the binaries and scripts required to launch S2Graph. The directories logs and var may not be present initially, and are created once S2Graph is launched.
The following will launch S2Graph, using HBase in standalone mode for data storage and H2 as the metadata storage.
sh bin/start-s2graph.sh
To connect to a remote HBase cluster or use MySQL as the metastore, refer to the instructions in conf/application.conf. S2Graph is tested on HBase versions 0.98, 1.0, 1.1, and 1.2 (https://hub.docker.com/r/harisekhon/hbase/tags/).
Project Layout
Here is what you can find in each subproject.
s2core: The core library, containing the data abstractions for graph entities, storage adapters and utilities.
s2http: The REST server built with akka-http, providing the write and query APIs.
loader: A collection of Spark jobs for bulk loading streaming data into S2Graph.
spark: Spark utilities for loader and s2counter_loader.
s2jobs: A collection of Spark jobs to support OLAP on S2Graph.
s2counter_core: The core library providing data structures and logic for s2counter_loader.
s2counter_loader: Spark streaming jobs that consume Kafka WAL logs and calculate various top-K results on-the-fly.
s2graph_gremlin: Gremlin plugin for TinkerPop users.
The first four projects are for OLTP-style workloads, currently the main target of S2Graph.
The remaining projects are for OLAP-style workloads, especially for integrating S2Graph with other projects, such as Apache Spark and Kafka.
The loader and spark projects have been deprecated in favor of s2jobs since version 0.2.0.
Note that the OLAP-style workloads are under development, and we plan to provide documentation in upcoming releases.
Your First Graph
Once the S2Graph server has been set up, you can start sending HTTP queries to the server to create a graph and pour some data into it. This tutorial goes over a simple toy problem to give a sense of what S2Graph's API looks like. bin/example.sh contains the example code below.
The toy problem is to create a timeline feature for a simple social media service, like a simplified version of Facebook's timeline. Using simple S2Graph queries, it is possible to keep track of each user's friends and their posts.
First, we need a name for the new service.
The following POST query will create a service named “KakaoFavorites”.
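The request itself is not reproduced here; as a rough sketch (the endpoint path and field names are assumptions based on S2Graph's REST API conventions and should be verified against the official API documentation), the request body could be built like this:

```python
import json

# Sketch of a createService request body. The field names here are
# assumptions and should be checked against the S2Graph API docs.
payload = {
    "serviceName": "KakaoFavorites",
    "compressionAlgorithm": "gz",
}
body = json.dumps(payload)
print(body)
# In practice this body would be POSTed to the server, e.g.:
#   curl -XPOST localhost:9000/graphs/createService \
#        -H 'Content-Type: application/json' -d "$body"
```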
To make sure the service has been created correctly, check out the following.
In S2Graph, relationships are organized as labels. Create a label called friends using the following createLabel API call:
Check if the label has been created correctly:
Now that the label friends is ready, we can store the friendship data. Entries of a label are called edges, and you can add edges with the edges/insert API:
Query the friends of Elmo with the getEdges API:
Now query the friends of Cookie Monster:
We will also need a new label, post, for the users' posts:
Now, insert some posts of the users:
Query the posts of Big Bird:
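To make the request bodies used in these steps concrete, here is a hedged sketch. The field names (timestamp, from, to, label, props; srcVertices, steps) follow the general shape of S2Graph's edge and query formats, but the identifiers and values are illustrative only:

```python
import json

# Illustrative edges/insert request body: a list of edge objects.
# The IDs and timestamps are made up for the sketch.
insert_body = [
    {"timestamp": 1444360152, "from": 101, "to": 102,
     "label": "friends", "props": {}},
    {"timestamp": 1444360153, "from": 101, "to": 103,
     "label": "friends", "props": {}},
]

# Illustrative getEdges request body: start from one vertex and walk
# outgoing "friends" edges one step.
query_body = {
    "srcVertices": [{"serviceName": "KakaoFavorites",
                     "columnName": "userName", "id": 101}],
    "steps": [[{"label": "friends", "direction": "out"}]],
}

print(json.dumps(insert_body))
print(json.dumps(query_body))
```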
So far, we have designed label schemas for the labels friends and post, and stored some edges in them.
This should be enough for creating the timeline feature! The following two-step query will return the URLs for Elmo’s timeline, which are the posts of Elmo’s friends:
The example above is by no means a full-blown social network timeline, but it gives you an idea of how to represent, store, and query graph data with S2Graph.
TinkerPop Support
Since version 0.2.0-incubating, S2Graph integrates natively with Apache TinkerPop 3.2.5. S2Graph passes Apache TinkerPop's StructureStandardSuite and ProcessStandardSuite test suites.
Getting Started
S2Graph is a singleton that can be shared among multiple threads. You instantiate S2Graph using the standard TinkerPop static constructors. The important configuration properties point to HBase for data storage and an RDBMS for meta storage.
Gremlin Console
On the Gremlin console, it is possible to install s2graph as a plugin. Once the s2graph-gremlin plugin is active, the following example will generate TinkerPop's modern graph in s2graph (taken from the TinkerPop tp3 modern graph, simple version).
conf = new BaseConfiguration()
graph = S2Graph.open(conf)
// init system default schema
S2GraphFactory.initDefaultSchema(graph)
// init extra schema for tp3 modern graph.
S2GraphFactory.initModernSchema(graph)
// load modern graph into current graph instance.
S2GraphFactory.generateModern(graph)
// traversal
t = graph.traversal()
// show all vertices in this graph.
t.V()
// show all edges in this graph.
t.E()
// add two vertices.
shon = graph.addVertex(T.id, 10, T.label, "person", "name", "shon", "age", 35)
s2graph = graph.addVertex(T.id, 11, T.label, "software", "name", "s2graph", "lang", "scala")
// add one edge between two vertices.
created = shon.addEdge("created", s2graph, "_timestamp", 10, "weight", 0.1)
// check if new edge is available through traversal
t.V().has("name", "shon").out()
// shutdown
graph.close()
Note that the simple version above uses the default schema for Service, Column, and Label for compatibility. Please check out the advanced example below to understand what data model is available in S2Graph.
It is possible to separate multiple namespaces into logical spaces. S2Graph achieves this with the following data model; details can be found at https://steamshon.gitbooks.io/s2graph-book/content/the_data_model.html.
Service: the top-level abstraction
A convenient logical grouping of related entities, similar to the database abstraction that most relational databases support.
Column: belongs to a service.
A set of homogeneous vertices such as users, news articles or tags.
Every vertex has a user-provided unique ID that allows efficient lookups.
A service typically contains multiple columns.
Label: schema for edges
A set of homogeneous edges such as friendships, views, or clicks.
A label defines a relation between two columns, as well as a recursive association within one column.
The two columns connected by a label need not be in the same service, allowing us to store and query data that spans multiple services.
Instead of converting a user-provided ID into an internal numeric ID, S2Graph simply composes the service and column metadata with the user-provided ID to guarantee a globally unique ID.
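As a rough illustration of this idea (a sketch only, not S2Graph's actual encoding), composing the service and column metadata with the user-provided ID might look like:

```python
# Sketch: composing service/column metadata with a user-provided ID
# to form a globally unique vertex ID. This mirrors the idea described
# above; it is NOT S2Graph's actual on-disk encoding.

def global_vertex_id(service: str, column: str, user_id) -> str:
    # The (service, column) pair acts as a namespace, so the same
    # user-provided ID in different columns never collides.
    return f"{service}/{column}/{user_id}"

a = global_vertex_id("s2graph", "user", 20)
b = global_vertex_id("kakao", "user", 20)
print(a)       # s2graph/user/20
print(b)       # kakao/user/20
assert a != b  # same user ID, different namespaces -> distinct global IDs
```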
The following is a simple example that exercises this data model in S2Graph.
// init graph
graph = S2Graph.open(new BaseConfiguration())
// 0. import necessary methods for schema management.
import static org.apache.s2graph.core.Management.*
// 1. initialize dbsession for management which store schema into RDBMS.
session = graph.dbSession()
// 2. properties for new service "s2graph".
serviceName = "s2graph"
cluster = "localhost"
hTableName = "s2graph"
preSplitSize = 0
hTableTTL = -1
compressionAlgorithm = "gz"
// 3. actual creation of s2graph service.
// details can be found on https://steamshon.gitbooks.io/s2graph-book/content/create_a_service.html
service = graph.management.createService(serviceName, cluster, hTableName, preSplitSize, hTableTTL, compressionAlgorithm)
// 4. properties for user vertex schema belongs to s2graph service.
columnName = "user"
columnType = "integer"
// each property consists of (name: String, defaultValue: String, dataType: String)
// details can be found on https://steamshon.gitbooks.io/s2graph-book/content/create_a_servicecolumn.html
props = [newProp("name", "-", "string"), newProp("age", "-1", "integer")]
schemaVersion = "v3"
user = graph.management.createServiceColumn(serviceName, columnName, columnType, props, schemaVersion)
// 2.1 (optional) global vertex index.
graph.management.buildGlobalVertexIndex("global_vertex_index", ["name", "age"])
// 3. create VertexId
// create S2Graph's VertexId class.
v1Id = graph.newVertexId(serviceName, columnName, 20)
v2Id = graph.newVertexId(serviceName, columnName, 30)
shon = graph.addVertex(T.id, v1Id, "name", "shon", "age", 35)
dun = graph.addVertex(T.id, v2Id, "name", "dun", "age", 36)
// 4. friends label
labelName = "friend_"
srcColumn = user
tgtColumn = user
isDirected = true
indices = []
props = [newProp("since", "-", "string")]
consistencyLevel = "strong"
hTableName = "s2graph"
hTableTTL = -1
options = null
friend = graph.management.createLabel(labelName, srcColumn, tgtColumn,
isDirected, serviceName, indices, props, consistencyLevel,
hTableName, hTableTTL, schemaVersion, compressionAlgorithm, options)
shon.addEdge(labelName, dun, "since", "2017-01-01")
t = graph.traversal()
println "All Edges"
println t.E().toList()
println "All Vertices"
println t.V().toList()
println "Specific Edge"
println t.V().has("name", "shon").out().toList()
Once the global vertex index above is created, S2GraphStep translates such a traversal into a Lucene query and retrieves the matching list of vertexIds/edgeIds, switching a full scan to point lookups.
Architecture
The physical data storage is closely related to the data model (https://steamshon.gitbooks.io/s2graph-book/content/the_data_model.html).
In the HBase storage, a Vertex is stored in the v column family, and an Edge is stored in the e column family.
Each Service/Label can have its own dedicated HBase table.
How an Edge/Vertex is actually stored in a KeyValue in HBase is described in detail in the documentation.
Indexes
will be updated.
Cache
will be updated.
Gremlin
S2Graph has full support for Gremlin. However, Gremlin's fine-grained, graphy nature can result in very high latency.
A provider is supposed to provide ProviderOptimizations to improve traversal latency, and the following optimizations are currently available.
1. S2GraphStep translates a traversal step into a Lucene query, so that the matching vertexIds/edgeIds can be found from the index provider, Lucene. For example, a traversal filtering on a vertex property needs a full scan on storage if there is no index provider. Once a global vertex index is created (see buildGlobalVertexIndex in the advanced example above), S2GraphStep translates the traversal into a Lucene query and gets a list of vertexIds/edgeIds, switching the full scan to point lookups.
The Official Website
S2Graph API document
Mailing Lists