NanoSparqlServer


NanoSparqlServer provides a lightweight REST API for RDF. It is implemented using the Servlet API. You can run NanoSparqlServer from the command line or embedded within your application using the bundled jetty dependencies. You can also deploy the REST API servlets into a standard servlet engine.

Deploying NanoSparqlServer

You DO NOT need to deploy the Sesame WAR to run NanoSparqlServer. NanoSparqlServer can be run from the command line (using jetty), embedded (using jetty), or deployed in a servlet container such as Tomcat. By far the easiest way to deploy it is in a servlet container.

Downloading the Executable Jar

Download the latest bigdata-bundled.jar file and run it:

java -server -Xmx4g -jar bigdata-bundled.jar

You may also check out the code and use the ant task to generate the jar:

ant clean executable-jar

This generates the bundled jar under ant-build (e.g., ant-build/bigdata-_X_Y_Z_YYYYMMDD-bundled.jar), which you can run directly:

java -server -Xmx4g -jar ant-build/bigdata-_X_Y_Z_YYYYMMDD-bundled.jar

Once started, the default service URL is http://localhost:9999/bigdata/. For example:

java -server -Xmx4g -jar bigdata-_X_Y_Z_YYYYMMDD-bundled.jar (e.g., bigdata-1.5.1-20150320.jar)

...

serviceURL: http://127.0.0.1:9999


Welcome to Blazegraph(tm) by SYSTAP.


Go to http://localhost:9999/bigdata/ to get started.

You can specify the properties file using -Dbigdata.propertyFile=<path>:

java -server -Xmx4g -Dbigdata.propertyFile=/etc/blazegraph/RWStore.properties -jar bigdata-bundled.jar

Command line (using jetty)

To run the server from the command line (using jetty), you first need to know how your classpath should be set. The bundleJar target of the top-level build.xml file can be invoked to generate a bundle-<version>.jar file to simplify classpath definition. Look in the bigdata-perf directories for examples of ant scripts which do this.

Once you know how to set your classpath you can run the NanoSparqlServer from the command line by executing the class com.bigdata.rdf.sail.webapp.NanoSparqlServer providing the connection port, the namespace and a property file:

java -cp ... -server com.bigdata.rdf.sail.webapp.NanoSparqlServer <port> <namespace> <propertiesFile>

The ... should be your classpath.

The port is just whatever http port you want to run on.

The namespace is the namespace of the triple or quad store instance within bigdata to which you want to connect. If no such namespace exists, a default kb instance is created.

The propertiesFile is where you configure bigdata. You can start with RWStore.properties and then edit it to match your requirements. There are a variety of example property files in samples for quads, triples, inference, provenance, and other interesting variations.
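
For orientation, here is a minimal sketch of what a properties file might contain. It uses only property names that appear elsewhere on this page; the journal path is a hypothetical location that you should adjust for your installation:

# Location of the backing journal (database) file -- hypothetical path.
com.bigdata.journal.AbstractJournal.file=/var/lib/blazegraph/bigdata.jnl
# Provision the default KB instance as a quad store (use false for triples).
com.bigdata.rdf.store.AbstractTripleStore.quads=true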

Embedded (using jetty)

The following code example starts a server from code - see NSSEmbeddedExample.java for the full example and running code.

            // Excerpt from NSSEmbeddedExample.java: port, indexManager, and
            // initParams are initialized earlier in that example.
            server = NanoSparqlServer.newInstance(port, indexManager,
                    initParams);

            server.start();

            // Determine the actual port on which jetty is listening.
            final int actualPort = server.getConnectors()[0]
                    .getLocalPort();

            String hostAddr = NicUtil.getIpAddress("default.nic",
                    "default", true/* loopbackOk */);

            if (hostAddr == null) {

                hostAddr = "localhost";

            }

            final String serviceURL = new URL("http", hostAddr, actualPort, ""/* file */)
                    .toExternalForm();

            System.out.println("serviceURL: " + serviceURL);

            // Block and wait. The NSS is running.
            server.join();

Servlet Container (Tomcat, jetty, etc)

Download WAR

Download, install, and configure a servlet container. See the documentation for your servlet container, as they are all different.

Download the latest bigdata.war file. Alternatively, you can build the bigdata.war file:

ant clean bundleJar war

This generates ant-build/bigdata.war.

Drop the WAR into the webapps directory of your servlet container and unpack it.

Build jetty deployer

Alternatively you can build the deployer for jetty. This approach may be used for both HA and non-HA deployments. It produces a directory structure that is suitable for installation as a service. The web.xml, jetty.xml, log4j.properties and related files are all located within the generated directory structure. See HAJournalServer for details on the structure and configuration of the generated distribution.

ant stage

Configuration

Note: It is strongly advised that you unpack the WAR before you start it and edit the RWStore.properties and/or the web.xml deployment descriptor. The web.xml file controls the location of the RWStore.properties file. The RWStore.properties file controls the behavior of the bigdata database instance, the location of the database instance on your disk, and the configuration for the default triple and/or quad store instance that will be created when the webapp starts for the first time. Take a moment to review and edit web.xml and RWStore.properties before you go any further. See GettingStarted if you need help to setup the KB for triples versus quads, enable inference, etc.

Note: As of r6797 and releases after 1.2.2, you can specify the following property to override the location of the bigdata property file:

-Dcom.bigdata.rdf.sail.webapp.ConfigParams.propertyFile=FILE

where FILE is the fully qualified path of the bigdata property file (e.g., RWStore.properties).

You should specify JAVA_OPTS with at least the following properties. As a guideline, the maximum Java heap size should be no more than 1/2 of the available RAM. Heap sizes of 2G to 8G are recommended to avoid long GC pauses. Larger heaps are possible with the G1 collector (in Java 7).

export JAVA_OPTS="-server -Xmx2g"

Logging

A log4j.properties file is deployed to the WEB-INF/classes directory in the WAR. This will be located automatically during startup. Releases through 1.0.2 will log a warning indicating that the log4j configuration could not be located, but the log4j.properties file is still in effect.

By default, the log4j.properties file will log on the ConsoleAppender. You can edit the log4j.properties file to specify a different appender, e.g., a FileAppender and log file.

You can override the log4j.properties file with your own version by passing a Java property at the command line.

-Dlog4j.configuration={path to file}

Common Startup Problems

The default web.xml and RWStore.properties files use path names which are relative to the directory in which you start the servlet engine. To use the defaults for those files with tomcat you must start tomcat from the 'bin' directory. For example:

cd bin
./startup.sh

If you have any problems getting the bigdata WAR to start, please consult the servlet log files for detailed information which can help you to localize a configuration error. For Tomcat6 on Ubuntu 10.04 the servlet log is called /var/lib/tomcat6/logs/catalina.out . It may have another name or location in another environment. If you see a permissions error on attempting to open file rules.log then your servlet engine may have been started from the wrong directory.

If you cannot start Tomcat from the 'bin' directory as described above, then you can instead change bigdata's file paths from relative to absolute:

  1. In webapps/bigdata/WEB-INF/RWStore.properties change this line:
    com.bigdata.journal.AbstractJournal.file=bigdata.jnl
  2. In webapps/bigdata/WEB-INF/classes/log4j.properties change these three lines:
    1. log4j.appender.ruleLog.File=rules.log
    2. log4j.appender.queryLog.File=queryLog.csv
    3. log4j.appender.queryRunStateLog.File=queryRunState.log
  3. In webapps/bigdata/WEB-INF/web.xml change this line:
    <param-value>../bigdata/RWStore.properties</param-value>

Active URLs

When deployed normally, the following URLs should be active (make sure you use the correct port# for your servlet engine):

  1. http://localhost:8080/bigdata - help page / console. (This is also called the serviceURL.)
  2. http://localhost:8080/bigdata/sparql - REST API (This is also called the SparqlEndpoint and uses the default namespace.)
  3. http://localhost:8080/bigdata/status - Status page
  4. http://localhost:8080/bigdata/counters - Performance counters

For example, you can select everything in the database using (this will be an empty result set for a new quad store):

http://localhost:8080/bigdata/sparql?query=select * where { ?s ?p ?o } limit 1

URL encoded this would be:

http://localhost:8080/bigdata/sparql?query=select%20*%20where%20{%20?s%20?p%20?o%20}%20limit%201

web.xml

The following context-param entries are defined. Also see HAJournalServer and HALoadBalancer.

name default definition since
propertyFile WEB-INF/RWStore.properties The property file (for a standalone database instance) or the jini configuration file (for a federation). The file MUST end with either ".properties" or ".config". This path is relative to the directory from which you start the servlet container so you may have to edit it for your installation, e.g., by specifying an absolute path. Also, it is a good idea to review the RWStore.properties file as well and specify the location of the database file on which it will persist your data. Note: You MAY override this parameter using "-Dcom.bigdata.rdf.sail.webapp.ConfigParams.propertyFile=FILE" when starting the servlet container.
namespace kb The default bigdata namespace of the triple or quad store instance to be exposed.
create true When true, a new triple or quad store instance will be created if none is found at that namespace.
queryThreadPoolSize 16 The size of the thread pool used to service SPARQL queries -OR- ZERO (0) for an unbounded thread pool (which is not recommended).
readOnly false When true, the REST API will not permit mutation operations.
queryTimeout 0 When non-zero, the timeout for queries (milliseconds).
warmupTimeout 0 When non-zero, the timeout for the warmup period (milliseconds). The warmup period pulls in the non-leaf index pages and reduces the impact of sudden heavy query workloads on the disk and on GC. The end points are not available during the warmup period. 1.5.2
warmupNamespaceList A list of the namespaces to be exercised during the warmup period (optional). When the list is empty, all namespaces will be warmed up. 1.5.2
warmupThreadPoolSize 20 The number of parallel threads to use for the warmup period. At most one thread will be used per index. 1.5.2

Highly Available Replication Cluster (HA)

See HAJournalServer for information on deploying the HA Replication Cluster.

Scale-out (cluster / federation)

The NanoSparqlServer will automatically create a KB instance for a given namespace if none exists. However, the default KB configuration is not appropriate for scale-out. In order to create a KB instance which is appropriate for scale-out you need to override the properties object which will be seen by the NanoSparqlServer (actually, by the BigdataRDFServletContext). You can do this by editing the "com.bigdata.service.jini.JiniClient" component block in the configuration file. The line that you want to change is:

old:
    // properties = new NV[] {};
new:
   properties =	lubm.properties;

This will direct the NanoSparqlServer to use the configuration for the KB instance described by the "lubm" component in the file, which gives a KB configuration appropriate for the LUBM benchmark. You can then modify the "lubm" component to reflect your use case, e.g., triples versus quads, etc.

To setup for quads, change the following lines in the "lubm" configuration block:


old: 
    static private namespace = "U"+univNum+"";
new:
    static private namespace = "PUT-YOUR_NAMESPACE_HERE"; // Note: This MUST be the same value you will specify to the NanoSparqlServer.

old:
	//new NV(BigdataSail.Options.AXIOMS_CLASS, "com.bigdata.rdf.axioms.RdfsAxioms"),
new:
         new NV(BigdataSail.Options.AXIOMS_CLASS,"com.bigdata.rdf.axioms.NoAxioms"),

new:
	new NV(BigdataSail.Options.QUADS_MODE,"true"),

old:
        new NV(BigdataSail.Options.FORWARD_CHAIN_OWL_INVERSE_OF, "true"),
        new NV(BigdataSail.Options.FORWARD_CHAIN_OWL_TRANSITIVE_PROPERTY, "true"),
new:
//        new NV(BigdataSail.Options.FORWARD_CHAIN_OWL_INVERSE_OF, "true"),
//        new NV(BigdataSail.Options.FORWARD_CHAIN_OWL_TRANSITIVE_PROPERTY, "true"),

Note that you have to specify the namespace both in the configuration file and on the NanoSparqlServer command line, since the configuration file is parameterized to override various indices based on the namespace.

Start the NanoSparqlServer using nanoSparqlServer.sh. You need to specify the port and the default KB namespace on the command line:

nanoSparqlServer.sh port namespace

The NanoSparqlServer will echo the serviceURL to the console. The actual URL depends on your installation, but it will be something like this:

serviceURL: http://192.168.1.10:8090/bigdata

The "serviceURL" is actually the URI of the NanoSparqlServer web application. You can interact directly with the web application. If you want to use the SPARQL end point, you need to append "/sparql" to that URL. For example:

serviceURL: http://192.168.1.10:8090/bigdata/sparql

Note: By default, the nanoSparqlServer.sh script will assert a read lock for the lastCommitTime on the federation. This removes the need to obtain a transaction per query on a cluster. See the script file for more information.


Issues:

  1. log4j configuration complaints.
  2. reload of the webapp causes complaints.
  3. refer people to JVM settings for decent performance.

REST API

SPARQL End Point

The NanoSparqlServer will respond at the following URL

http://localhost:port/bigdata/sparql

A request to the following URL will result in a permanent redirect (301) to the URL given above:

http://localhost:port/

The baseURI for the NanoSparqlServer is the effective service end point URL.

MIME Types

In general, requests may use any of the known MIME types. Likewise, you can CONNEG for any of these MIME types. However, CONNEG may not be very robust. Therefore, when seeking a specific MIME type for a response, it is best to specify an Accept header which specifies just the desired MIME type.

RDF data

These data are based on the org.openrdf.rio.RDFFormat declarations. The set of understood formats is extensible. Additional declarations MAY be registered with the openrdf platform and associated with parsers and writers for that RDFFormat. The recommended charset, file name extension, etc. are always as declared by the IANA MIME type registration. Note that a potential for confusion exists with the ".xml" file extension, so its use with this API is not recommended. RDR means that both RDF* and SPARQL* are supported for a given data interchange syntax. See Reification_Done_Right for more details.

MIME Type File extension Charset Name URL RDR? Comments
application/rdf+xml .rdf, .rdfs, .owl, .xml UTF-8 RDF/XML http://www.w3.org/TR/REC-rdf-syntax/
text/plain .nt US-ASCII N-Triples http://www.w3.org/TR/rdf-testcases/#ntriples N-Triples defines an escape encoding for non-ASCII characters.
application/x-n-triples-RDR .ntx US-ASCII N-Triples-RDR http://www.w3.org/TR/rdf-testcases/#ntriples Yes This is a bigdata specific extension of N-TRIPLES that supports RDR.
application/x-turtle .ttl UTF-8 Turtle http://www.w3.org/TeamSubmission/turtle/
application/x-turtle-RDR .ttlx UTF-8 Turtle-RDR http://www.bigdata.com/whitepapers/reifSPARQL.pdf Yes This is a bigdata specific extension that supports RDR.
text/rdf+n3 .n3 UTF-8 N3 http://www.w3.org/TeamSubmission/n3/
application/trix .trix UTF-8 TriX http://www.hpl.hp.com/techreports/2003/HPL-2003-268.html
application/x-trig .trig UTF-8 TRIG http://www.wiwiss.fu-berlin.de/suhl/bizer/TriG/Spec
text/x-nquads .nq US-ASCII NQUADS http://sw.deri.org/2008/07/n-quads/ Parser only before bigdata 1.4.0.
application/sparql-results+json, application/json .srj, .json UTF-8 Bigdata JSON interchange for RDF/RDF* N/A Yes bigdata json interchange supports RDF RDR data and also SPARQL result sets.

SPARQL Result Sets

MIME Type Name URL RDR? Comments
application/sparql-results+xml SPARQL Query Results XML Format http://www.w3.org/TR/rdf-sparql-XMLres/
application/sparql-results+json, application/json SPARQL Query Results JSON Format http://www.w3.org/TR/rdf-sparql-json-res/ Yes The bigdata extension allows the interchange of RDR data in result sets as well.
application/x-binary-rdf-results-table Binary Query Results Format http://www.openrdf.org/doc/sesame2/api/org/openrdf/query/resultio/binary/BinaryQueryResultConstants.html This is a format defined by the openrdf platform.
text/tab-separated-values Tab Separated Values (TSV) http://www.w3.org/TR/sparql11-results-csv-tsv/
text/csv Comma Separated Values (CSV) http://www.w3.org/TR/sparql11-results-csv-tsv/

Property set data

The Multi-Tenancy API interchanges property set data. The MIME types understood by the API are:

MIME Type File extension Charset
application/xml .xml UTF-8
text/plain .properties UTF-8

Mutation Result

Operations which cause a mutation will report an XML document having the general structure:

<data modified="5" milliseconds="112"/>

Where modified is the mutation count.

Where milliseconds is the elapsed time for the operation.

API Atomicity

Queries use snapshot isolation.

Mutation operations are ACID against a standalone database and shard-wise ACID against a bigdata federation.

API Parameters

Some operations accept parameters that MUST be URIs. Others accept parameters that MAY be either Literals or URIs. Where either a literal or a URI value can be used, as in the s, p, o, and c parameters for DELETE or ESTCARD, then angle brackets (for a URI) or quoting (for a Literal) MUST be used. Otherwise, angle brackets and quoting MUST NOT be used.

URI Only Value Parameters

If an operation accepts a parameter that MUST be a URI, then the URI is given without the surrounding angle brackets < >. This is true for all SPARQL and SPARQL 1.1 query and update URI parameters.

For example, the following request inserts the data from tbox.ttl into the context named <http://example.org/tbox>. The context-uri MUST be a URI. The angle brackets are NOT used.

curl -D- -H 'Content-Type: text/turtle' --upload-file tbox.ttl -X POST 'http://localhost:80/bigdata/sparql?context-uri=http://example.org/tbox'

URI or Literal Valued Parameters

If an operation accepts parameters that MAY be either a URI or a Literal, then the value MUST be specified using angle brackets or quotes as appropriate. For these parameters, the quotation marks and angle brackets are necessary to distinguish between values that are Literals and values that are URIs. Without this, the API could not distinguish between a Literal whose text was a well-formed URI and a URI.

Examples of properly formed URIs and Literals include:

<http://www.bigdata.com/>
"abc"
"abc"@en
"3"^^xsd:int

A number of the bigdata REST API methods can operate on Literals or URIs. The following example will delete all triples in the named graph <http://example.org/graph1>. The angle brackets MUST be used since the DELETE methods allow you to specify the s (subject), p (predicate), o (object), or c (context) for the triple or quad pattern to be deleted. Since the pattern may include both URIs and Literals, Literals MUST be quoted and URIs MUST use angle brackets:

curl -D- -X DELETE 'http://localhost:80/bigdata/sparql?c=<http://example.org/graph1>'

Some REST API methods (e.g., DELETE_BY_ACCESS_PATH) allow multiple bindings for the context position. Such bindings are distinct URL query parameters. For example, the following removes all statements in the named graph <http://example.org/graph1> and the named graph <http://example.org/graph2>.

curl -D- -X DELETE 'http://localhost:80/bigdata/sparql?c=<http://example.org/graph1>&c=<http://example.org/graph2>'

QUERY

GET or POST

GET Request-URI ?query=...

-OR-

POST Request-URI ?query=...

The response body is the result of the query.

The following query parameters are understood:

parameter definition
timestamp A timestamp corresponding to a commit time against which the query will read.
explain The query will be run, but the response will be an HTML document containing an "explanation" of the query. The response currently includes the original SPARQL query, the operator tree obtained by parsing that query, and detailed metrics from the evaluation of the query. This information may be used to examine opportunities for query optimization.
analytic This enables the AnalyticQuery mode.
default-graph-uri Specify zero or more graphs whose RDF merge is the default graph for this query (protocol option with the same semantics as FROM).
named-graph-uri Specify zero or more named graphs for this query (protocol option with the same semantics as FROM NAMED).
format Available in versions after 1.4.0. This is an optional query parameter that allows you to set the result type other than via the Accept Headers. Valid values are json, xml, application/sparql-results+json, and application/sparql-results+xml. json and xml are simple short cuts for the full mime type specification. Setting this parameter will override any Accept Header that is present.

The following HTTP headers are understood:

parameter definition
X-BIGDATA-MAX-QUERY-MILLIS The maximum time in milliseconds for the query to execute (see the example at the end of this section).

For example, the following simple query will return one statement from the default KB instance:

curl -X POST http://localhost:8080/bigdata/sparql --data-urlencode 'query=SELECT * { ?s ?p ?o } LIMIT 1' -H 'Accept:application/rdf+xml'

If you want the result set in JSON using Accept headers, use:

curl -X POST http://localhost:8080/bigdata/sparql --data-urlencode 'query=SELECT * { ?s ?p ?o } LIMIT 1' -H 'Accept:application/sparql-results+json'

If you want the result set in JSON using the format query parameter, use:

curl -X POST http://localhost:8080/bigdata/sparql --data-urlencode 'query=SELECT * { ?s ?p ?o } LIMIT 1' --data-urlencode 'format=json'

If cached results are Ok, then you can use an HTTP GET instead:

curl -G http://localhost:8080/bigdata/sparql --data-urlencode 'query=SELECT * { ?s ?p ?o } LIMIT 1' -H 'Accept:application/sparql-results+json'
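
The X-BIGDATA-MAX-QUERY-MILLIS header described above can be combined with any of these requests to bound query execution time. A sketch (the 10000 millisecond limit is just an illustrative value):

curl -X POST http://localhost:8080/bigdata/sparql --data-urlencode 'query=SELECT * { ?s ?p ?o } LIMIT 1' -H 'Accept:application/sparql-results+json' -H 'X-BIGDATA-MAX-QUERY-MILLIS: 10000'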

FAST RANGE COUNTS

Bigdata uses fast range counts internally for its query optimizer. Fast range counts on an access path are computed with two key probes against the appropriate index. Fast range counts are appropriate for federated query engines, where they provide more information than an "ASK" query for a triple pattern. Fast range counts are also exact range counts under some common deployment configurations.

Fast range counts are fast. They use two key probes to find the ordinal index of the from and to key for the access path and then report (toIndex-fromIndex). This is orders of magnitude faster than you can achieve in SPARQL using a construction like "SELECT COUNT (*) { ?s ?p ?o }" because the corresponding SPARQL query must actually visit each tuple in that key range, rather than just reporting how many tuples there are.

Fast range counts are exact when running against a BigdataSail on a local journal which has been provisioned without full read/write transactions. When full read/write transactions are enabled, the fast range counts will also report the "delete markers" in the index. In scale-out, the fast range counts are also approximate if the key range spans more than one shard (in which case you are talking about a lot of data).

Note: This method is available in releases after version 1.0.2.

GET Request-URI ?ESTCARD&([s|p|o|c]=(uri|literal))+

Where uri and literal use the SPARQL syntax for fully specified URIs and Literals, as per #URI_or_Literal_Valued_Parameters, e.g.:

<http://www.bigdata.com/>
"abc"
"abc"@en
"3"^^xsd:int

The quotation marks and angle brackets are necessary to distinguish between values that are Literals and values that are URIs.

The response is an XML document having the general structure:

<data rangeCount="5" milliseconds="12"/>

Where rangeCount is the fast range count for the access path.

Where milliseconds is the elapsed time for the operation.

For example, this will report a fast estimated range count for all triples or quads in the default KB instance:

curl -G -H 'Accept: application/xml' 'http://localhost:8080/bigdata/sparql' --data-urlencode ESTCARD

While this example will only report the fast range count for all triples having the specified subject URI:

curl -G -H 'Accept: application/xml' 'http://localhost:8080/bigdata/sparql' --data-urlencode ESTCARD --data-urlencode 's=<http://www.w3.org/People/Berners-Lee/card#i>'

INSERT

INSERT RDF (POST with Body)

POST Request-URI
...
Content-Type: 
...
BODY

Perform an HTTP-POST, which corresponds to the basic CRUD operation "create" according to the generic interaction semantics of HTTP REST.

Where BODY is the new RDF content using the representation indicated by the Content-Type.

You can also specify a context-uri request parameter which sets the default context when triples data are loaded into a quads store (available in releases after 1.0.2).

For example, the following command will POST the local file 'data-1.nq' to the default KB.

curl -X POST -H 'Content-Type:text/x-nquads' --data-binary '@data-1.nq' http://localhost:8080/bigdata/sparql

INSERT RDF (POST with URLs)

POST Request-URI ?uri=URI

Where URI identifies a resource whose RDF content will be inserted into the database. The uri query parameter may occur multiple times. All identified resources will be loaded in a single operation. See the RDF data MIME types above for the formats understood by this operation.

You can also specify a context-uri request parameter which sets the default context when triples data are loaded into a quads store (available in releases after 1.0.2).

For example, the following command will load the data from the specified URI into the default KB instance. For this command, the uri parameter must be a resource that can be resolved by the server that will execute the INSERT operation. Typically, this means either a public URL or a URL for a file in the local file system on the server.

curl -X POST --data-binary 'uri=file:///Users/bryan/Documents/workspace/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/resources/data/foaf/data-0.nq' http://localhost:8080/bigdata/sparql

DELETE

DELETE with Query

DELETE Request-URI ?query=...

Where query is a CONSTRUCT or DESCRIBE query.

Note: The QUERY + DELETE operation is ACID.
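
For example, the following sketch deletes all statements constructed for a hypothetical subject <http://example.org/s1>. Note the use of --get (as in the access path example below) so that curl appends the query parameter to the request URI rather than sending it as a request body:

curl --get -X DELETE http://localhost:8080/bigdata/sparql --data-urlencode 'query=CONSTRUCT { <http://example.org/s1> ?p ?o } WHERE { <http://example.org/s1> ?p ?o }'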

DELETE with Body (using POST)

POST Request-URI ?delete
...
Content-Type
...
BODY

This is a POST because many APIs do not allow a BODY with a DELETE verb. The BODY contains RDF statements according to the specified Content-Type. Statements parsed from the BODY are deleted.
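
For example, the following sketch deletes the statements contained in a hypothetical local Turtle file remove.ttl:

curl -X POST -H 'Content-Type:text/turtle' --data-binary '@remove.ttl' 'http://localhost:8080/bigdata/sparql?delete'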

DELETE with Access Path

Note: This method is available in releases after version 1.0.2.

DELETE Request-URI ?([s|p|o|c]=(uri|literal))+

Where uri and literal use the SPARQL syntax for fully specified URIs and Literals, as per #URI_or_Literal_Valued_Parameters, e.g.:

<http://www.bigdata.com/>
"abc"
"abc"@en
"3"^^xsd:int

The quotation marks and angle brackets are necessary to distinguish between values that are Literals and values that are URIs.

All statements matching the bound values of the subject (s), predicate (p), object (o), and/or context (c) position will be deleted from the database. Each position may be specified at most once, but more than one position may be specified.

For example, a DELETE of everything for a given context would be:

DELETE Request-URI ?c=<http://example.org/foo>

And a DELETE of everything for some subject and predicate would be:

DELETE Request-URI ?s=<http://example.org/s1>&p=<http://www.example.org/p1>

And to DELETE everything having some object value:

DELETE Request-URI ?o="abc"

or

DELETE Request-URI ?o="5"^^<datatypeUri>

And to delete everything at that end point:

DELETE Request-URI 

For example, the following will delete all statements with the specified subject in the default KB instance.

CAUTION: This curl command is tricky. If you specify just -X DELETE without the --get, then curl sends the s parameter in the request body rather than on the request URI, the parameter is ignored, and EVERYTHING in the default KB instance will be removed!

curl --get -X DELETE -H 'Accept: application/xml' 'http://localhost:8080/bigdata/sparql' --data-urlencode 's=<http://www.w3.org/People/Berners-Lee/card#i>'

UPDATE (SPARQL 1.1 UPDATE)

POST Request-URI ?update=...
parameter definition
using-graph-uri Specify zero or more graphs whose RDF merge is the default graph for the update request (protocol option with the same semantics as USING).
using-named-graph-uri Specify zero or more named graphs for the update request (protocol option with the same semantics as USING NAMED).

See SPARQL 1.1 Protocol.

Note: This method is available in releases after version 1.1.0.

For example, the following SPARQL 1.1 UPDATE request would drop all existing statements in the default KB instance and then load data into the default KB from the specified URL:

curl -X POST http://localhost:8080/bigdata/sparql --data-urlencode 'update=DROP ALL; LOAD <file:/Users/bryan/Documents/workspace/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/resources/data/foaf/data-0.nq.gz>;'

UPDATE (DELETE + INSERT)

UPDATE (DELETE statements selected by a QUERY plus INSERT statements from Request Body using PUT)

PUT Request-URI ?query=...
...
Content-Type
...
BODY

Where query is a CONSTRUCT or DESCRIBE query.

Note: The QUERY + DELETE operation is ACID.

Note: You MAY specify a CONSTRUCT query with an empty WHERE clause in order to specify a set of statements to be removed without reference to statements already existing in the database. For example:

CONSTRUCT { bd:Bryan bd:likes bd:RDFS } { }

Note the trailing "{ }" which is the empty WHERE clause. This makes it possible to delete arbitrary statements followed by the insert of arbitrary statements.

parameter definition
context-uri Request parameter which sets the default context when triples data are loaded into a quads store (available in releases after 1.0.2).
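
Putting this together, a sketch of the operation with curl might look like the following. The query parameter (URL encoded on the request URI) selects the statements to delete, while the request body, read here from a hypothetical file insert.ttl, supplies the statements to insert:

curl -X PUT -H 'Content-Type:text/turtle' --data-binary '@insert.ttl' 'http://localhost:8080/bigdata/sparql?query=CONSTRUCT%20{%20<http://example.org/s1>%20?p%20?o%20}%20WHERE%20{%20<http://example.org/s1>%20?p%20?o%20}'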

UPDATE (POST with Multi-Part Request Body)

POST Request-URI ?updatePost
...
Content-Type: multipart/form-data; boundary=...
...
form-data; name="remove"
Content-Type: ...
Content-Body
...
form-data; name="add"
Content-Type: ...
Content-Body
...
BODY

You can specify two sets of serialized statements - one to be removed and one to be added. This operation will be ACID on the server.

parameter definition
context-uri Request parameter which sets the default context when triples data are loaded into a quads store (available in releases after 1.0.2).
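
A sketch of this operation using curl's multipart support, with two hypothetical local files remove.ttl and add.ttl holding the statements to remove and to add respectively:

curl -X POST 'http://localhost:8080/bigdata/sparql?updatePost' -F 'remove=@remove.ttl;type=text/turtle' -F 'add=@add.ttl;type=text/turtle'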

STATUS

GET /status

Various information about the SPARQL end point. URL Query parameters include:

parameter definition
showQueries(=details) Show information on all queries currently executing on the NanoSparqlServer. The queries will be arranged in descending order by their elapsed evaluation time. When the value of this query parameter is "details", the response will include the query evaluation metrics for each bop (bigdata operator) in the query. Otherwise only the query evaluation metrics for the top-level query bop in the query plan will be included. In either case, the reported metrics are updated each time the page is refreshed so it is possible to track the progress of a long running query in this manner.
queryId=UUID Request information only for the specified query(s). This parameter may appear zero or more times. (Since bigdata 1.1).
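
For example, the following requests the status page with per-operator details for the currently running queries:

curl 'http://localhost:8080/bigdata/status?showQueries=details'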

CANCEL

For the default namespace:

POST /bigdata/sparql/?cancelQuery&queryId=....

For a caller specified namespace:

POST /bigdata/namespace/sparql/?cancelQuery&queryId=....

Cancel one or more running query(s). Queries which are still running when the request is processed will be cancelled. (Since bigdata 1.1. Prior to bigdata 1.2, this method was available at /status. The preferred URI for this method is now the URI of the SPARQL end point. The /status URI is deprecated for this method.)

See the queryId QueryHint.

parameter definition
queryId=UUID The UUID of a running query.

For example, for the default namespace:

curl -X POST http://localhost:8091/bigdata/sparql --data-urlencode 'cancelQuery' --data-urlencode 'queryId=a7a4b8e0-2b14-498c-94ab-9d79caddb0f6'

For a caller specified namespace:

curl -X POST http://localhost:8091/bigdata/namespace/kb/sparql --data-urlencode 'cancelQuery' --data-urlencode 'queryId=a7a4b8e0-2b14-498c-94ab-9d79caddb0f6'

Multi-Tenancy API

The Multi-Tenancy API allows you to administer and access multiple triple or quad store instances in a single backing Journal or Federation. Each triple or quad store instance has a unique namespace and corresponds to the concept of a VoID Dataset. A brief VoID description is used to describe the known data sets. A detailed VoID description is included in the Service Description of a data set. The default data set is associated with the namespace "kb" (unless you override that on the NanoSparqlServer command line). The SPARQL end point for a data set may be used to obtain a detailed Service Description of that data set (including VoID metadata and statistics), to issue SPARQL 1.1 Query and Update requests, etc. That end point is:

/bigdata/namespace/NAMESPACE/sparql

where NAMESPACE is the namespace of the desired data set.

This feature is available in bigdata releases after 1.2.2.
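
For example, a query may be issued directly against the end point for the default "kb" namespace (the same pattern works for any other namespace):

curl -X POST http://localhost:8080/bigdata/namespace/kb/sparql --data-urlencode 'query=SELECT * { ?s ?p ?o } LIMIT 1'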

DESCRIBE DATA SETS

GET /bigdata/namespace

Obtain a brief VoID description of the known data sets. The description includes the namespace of the data set and its sparql end point. A more detailed service description is available from the sparql end point. The response to this request MAY be cached.

For example:

curl localhost:8090/bigdata/namespace

CREATE DATA SET

Request:

POST /bigdata/namespace
...
Content-Type
...
BODY

Response:

HTTP/1.1 201 Created
Content-Type: text/plain; charset=ISO-8859-1
Location: http://localhost:8080/bigdata/namespace/NAMESPACE/sparql
Content-Length: ...
CREATED: NAMESPACE

Status codes (since 1.3.2)

Status Code Meaning
201 Created
409 Conflict (Namespace exists).

The Location header in the response provides a URL for the newly created SPARQL end point. This URL may be used to obtain a service description, issue queries, issue updates, etc.

Create a new data set (aka a KB instance). The data set is configured based on the inherited configuration properties as overridden by the properties specified in the request entity (aka the BODY). The Content-Type must be one of those recognized for Java properties (the supported MIME Types are specified at NanoSparqlServer#Property_set_data).

You MUST specify at least the following property in order to create a non-default data set:

com.bigdata.rdf.sail.namespace=NAMESPACE

where NAMESPACE is the name of the new data set.

See the javadoc for the BigdataSail and AbstractTripleStore for other configuration options. Also see the sample property files in bigdata-sails/src/samples.

Note: You can not reconfigure the Journal or Federation using this method. The properties will only be applied to the newly created data set. This method does NOT create a new backing Journal, it just creates a new data set on the same Journal (or on the same Federation when running on a cluster).

For example:

curl -v -X POST --data-binary @tmp.xml --header 'Content-Type:application/xml' http://localhost:8090/bigdata/namespace

where tmp.xml is patterned after one of the examples below. Be sure to replace MY_NAMESPACE with the namespace of the KB instance that you want to create. The new KB instance will inherit any defaults specified when the backing Journal or Federation was created. You can override any inherited properties by specifying a new value for that property with the request.

Quads

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
<!-- -->
<!-- NEW KB NAMESPACE (required). -->
<!-- -->
<entry key="com.bigdata.rdf.sail.namespace">MY_NAMESPACE</entry>
<!-- -->
<!-- Specify any KB specific properties here to override defaults for the BigdataSail -->
<!-- AbstractTripleStore, or indices in the namespace of the new KB instance. -->
<!-- -->
<entry key="com.bigdata.rdf.store.AbstractTripleStore.quads">true</entry>
</properties>

Triples + Inference + Truth Maintenance

To set up a KB that supports incremental truth maintenance, use the following properties.

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
<!-- -->
<!-- NEW KB NAMESPACE (required). -->
<!-- -->
<entry key="com.bigdata.rdf.sail.namespace">MY_NAMESPACE</entry>
<!-- -->
<!-- Specify any KB specific properties here to override defaults for the BigdataSail -->
<!-- AbstractTripleStore, or indices in the namespace of the new KB instance. -->
<!-- -->
<entry key="com.bigdata.rdf.store.AbstractTripleStore.quads">false</entry>
<entry key="com.bigdata.rdf.store.AbstractTripleStore.axiomsClass">com.bigdata.rdf.axioms.OwlAxioms</entry>
<entry key="com.bigdata.rdf.sail.truthMaintenance">true</entry>
</properties>

LIST PROPERTIES

GET /bigdata/namespace/NAMESPACE/properties

Obtain a list of the effective configuration properties for the data set named NAMESPACE.

For example, retrieve the configuration for a specified KB in either the text/plain or XML format.

curl --header 'Accept: text/plain' http://localhost:8090/bigdata/namespace/kb/properties
curl --header 'Accept: application/xml' http://localhost:8090/bigdata/namespace/kb/properties

DESTROY DATA SET

DELETE /bigdata/namespace/NAMESPACE

Destroy the data set identified by NAMESPACE.

For example:

curl -X DELETE http://localhost:8090/bigdata/namespace/kb

Transaction Management

This section is under development. We will be exposing support for sequences of read/write operations that are isolated by a common transaction through the REST API in 1.5.2. See http://trac.bigdata.com/ticket/1156 for details.

Choosing the right transaction model

Blazegraph supports two basic transaction models: unisolated and isolated. This choice is made on a namespace-by-namespace basis, but it may be defaulted for the Journal. To enable isolated operations, specify the following option for a namespace:

com.bigdata.rdf.sail.isolatableIndices=true

- Unisolated operations write on the live index objects. This option provides better scalability and better throughput because the unisolated transaction does not need to buffer its write set. The mutations are simply applied to the indices. If the transaction commits, the indices are checkpointed. If the transaction aborts, then the write set is discarded.

- Isolated operations rely on a fused view of the indices. An isolating index is established by the transaction in front of each index on which it needs to write. Mutations are written onto the isolating index. The writes are initially buffered in memory, but they will spill onto the disk for large transactions. When the transaction prepares, the write set is validated against the then current state of the unisolated indices. If there are no write conflicts, or if all conflicts can be reconciled, then the transaction can commit. Otherwise it must abort. Note that blazegraph does not support truth maintenance for namespaces that use isolated operations.

Note: Query always uses snapshot isolation regardless of which transaction model you choose. These read-only views are completely non-blocking, which is why blazegraph has such good performance for concurrent query. Since all queries have snapshot isolation, creating an explicit read-only transaction is only useful when more than one query needs to be run against the same commit point and there are concurrent writes on the database. Further, creating an explicit transaction incurs significant overhead due to the additional messages (CREATE-TX, QUERY, ABORT-TX) vs (QUERY).

Note: Transactions are scoped to the database, not the namespace. Thus a transaction MAY be used to coordinate operations across multiple namespaces.

Note: Open transactions pin the commit point on which the transaction is reading. Thus, long running transactions can prevent recycling.

Group Commit and Transactions

Group commit allows multiple write sets associated with different isolated or unisolated transactions to be melded into a single commit point. Group commit relies on a hierarchical locking scheme to serialize unisolated mutation operations for the same namespace. If isolated operations are being used, then group commit does not come into play until the transaction attempts to commit.

High Availability and Transactions

Each HAJournalServer has a local transaction manager. These transaction managers do not interchange messages as transactions are created and destroyed. This is key to achieving perfect linear scaling in query throughput as a function of the size of the replication cluster. During the 2-phase ACID commit, the nodes in the quorum communicate to identify the new consensus around the release time during the commit protocol. This consensus release time is used to decide which commit points are pinned and which can be recycled.

Transactions created on one node are NOT also registered on the other nodes. Further, the protocol for resynchronization of a node does not consider resynchronization of the transaction manager state since all transaction managers are completely independent. Thus, transactions created on a given HAJournalServer may be used on that HAJournalServer but are not visible on other HAJournalServer instances. However, commit times are the same for all nodes and all nodes will have a consensus about the release time so a commit time that is pinned on the leader (by a transaction) will be visible on the other nodes as well. Thus, the client can create a transaction (CREATE-TX) on the leader and use the readsOnCommitTime reported for that transaction to load balance queries across all nodes in the replication cluster. Those reads will have snapshot isolation in terms of the commit point pinned by the transaction until the transaction is either aborted (ABORT-TX) or committed (COMMIT-TX).

The practical impact is:

- Clients MUST use the leader to coordinate transactions (CREATE-TX, PREPARE-TX, IS-ACTIVE-TX, COMMIT-TX, ABORT-TX).
- Transaction identifiers (txId values) are created and managed by the leader. These txIds are NOT visible to the followers.
- Mutation operations isolated by a transaction MUST be directed to the leader (this is the same when transactions are not used - only the leader accepts writes).
- The readsOnCommitTime (see CREATE-TX) MAY be used to load balance read operations across the leader and followers (see above for details).
- Transactions break if there is a leader failover event or quorum break.

Scale-out and Transactions

Mutation operations in scale-out are shard-wise ACID and use the unisolated connection internally. Mutations are typically applied using an eventually consistent model. If the update fails, it is reapplied.

Scale-out supports snapshot isolation for query. The recommended pattern is to periodically update a global read lock to pin a globally consistent commit point. Queries are then issued against the commit time associated with the read lock. This removes the (significant) overhead of coordination with the transaction service in scale-out on a per-query basis.

Transaction Management API

This API is only required for isolated operations where the client wishes to have the life cycle of a transaction span multiple client operations (for example, more than one query, more than one update, or some combination of queries and updates). In this case, the client follows a pattern:

POST /bigdata/tx => txId

doWork(txId)....

POST /bigdata/tx/txid?COMMIT

Note: GET is not allowed for most transaction management methods to defeat http caching.
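
As a sketch of this pattern with curl (txId stands for whatever transaction identifier is reported in the CREATE-TX response; host and port follow the earlier examples):

# Create a transaction. The response entity reports the txId and readsOnCommitTime.
curl -X POST http://localhost:8080/bigdata/tx

# Do work isolated by the transaction, e.g. query against the pinned commit point.
curl -X POST http://localhost:8080/bigdata/sparql --data-urlencode 'query=SELECT * { ?s ?p ?o } LIMIT 1' --data-urlencode 'timestamp=txId'

# Commit (or abort) the transaction when done.
curl -X POST 'http://localhost:8080/bigdata/tx/txId?COMMIT'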

Response Entity

The general form of the response entity is an XML document having the following structure:

<xml
><response elapsed="..."
><tx txId="..." readsOnCommitTime="..." readOnly="true|false" 
/></xml>

The response entity is an XML document. Depending on the operation, there may be one or more tx elements in the response. For example, LIST-TX reports all active transactions.

The attributes of the response element are as follows:

Attribute Meaning
elapsed The elapsed time (milliseconds) to process the request on the server.

The attributes of the tx element are as follows:

Attribute Meaning
txId The transaction identifier. This must be used with the transaction API.
readsOnCommitTime The timestamp associated with the commit point on which the transaction is reading. This commit point (and all more recent commit points) are pinned by the transaction until it either aborts or commits.
readOnly "true" if the transaction is read-only and "false" if the transaction allows mutation.

Note: Either the txId or the readsOnCommitTime may be used for the &timestamp=... parameter on the REST API methods. However, in a Highly Available replication cluster the readsOnCommitTime MAY be used to load balance read operations across the cluster while only the leader will be able to interpret the txId.

LIST-TX

Obtain a list of active transactions.

GET /bigdata/tx

For example:

curl localhost:8090/bigdata/tx

A typical response:

HTTP/1.1 200 Ok
Location: http://localhost:8080/bigdata/tx/txId
Content-Type: application/xml
Content-Length: ...
<xml
><tx txId="..." readsOnCommitTime="..." readOnly="true|false"
><tx txId="..." readsOnCommitTime="..." readOnly="true|false"
><tx txId="..." readsOnCommitTime="..." readOnly="true|false"
><tx txId="..." readsOnCommitTime="..." readOnly="true|false"
><tx txId="..." readsOnCommitTime="..." readOnly="true|false"
/></xml>

CREATE-TX

Return a new transaction identifier.

POST /bigdata/tx(?timestamp=TIMESTAMP)

The timestamp parameter is a long (64-bit) integer. Its meaning is defined as follows; the default is 0 (UNISOLATED). Note that 0 corresponds to ITx.UNISOLATED and -1 corresponds to ITx.READ_COMMITTED. See ITransactionService for more details on the semantics of these symbolic constants.

what definition
0 This requests a new read/write transaction. The transaction will read on the last commit point on the database at the time that the transaction was created. This is the default behavior if the timestamp parameter is not specified. Note: The federation architecture (aka scale-out) does NOT support distributed read/write transactions - all mutations in scale-out are shard-wise ACID.
-1 This requests a new read-only transaction. The transaction will read on the last commit point on the database at the time that the transaction was created.
timestamp A positive value requests a new read-only transaction. The operation will be executed against the most recent committed state whose commit timestamp is less than or equal to the given timestamp.

A typical response is below. Note that the Location header will include the URI for the transaction while the transaction identifier is given in the response entity. The response entity is an XML document as defined above.

HTTP/1.1 201 Created
Location: http://localhost:8080/bigdata/tx/txId
Content-Type: application/xml
Content-Length: ...
<xml><tx txId="..." readsOnCommitTime="..." readOnly="true|false"/></xml>

Status codes

Status Code Meaning
201 Created
400 Bad request if the TIMESTAMP is a negative value other than -1 (READ_COMMITTED).
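
For example, to create a read/write transaction against the last commit point, or a read-only transaction (timestamp=-1), against the server from the LIST-TX example above:

curl -X POST http://localhost:8090/bigdata/tx
curl -X POST 'http://localhost:8090/bigdata/tx?timestamp=-1'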

STATUS-TX

Obtain status about the transaction. Note that committed and aborted transactions may no longer exist on the server.

POST /bigdata/tx/txId?STATUS

Status codes

Status Code Meaning
200 The transaction was found on the server.
404 The transaction was not found on the server.

The response entity is an XML document as defined above.

ABORT-TX

Aborts the transaction. The write set of the transaction (if any) is discarded. The transaction is no longer active.

POST /bigdata/tx/txId?ABORT

Status codes

Status Code Meaning
200 The transaction was aborted.
404 The transaction was not found on the server.

The response entity is an XML document as defined above.

PREPARE-TX

Returns true if the write set of the transaction passes validation. If it does not pass validation, then a COMMIT-TX message will fail and the transaction must be aborted by the client.

POST /bigdata/tx/txId?PREPARE

Status codes

Status Code Meaning
200 The transaction was validated.
404 The transaction was not found on the server.
409 Validation failed for the transaction.

The response entity is an XML document as defined above.

COMMIT-TX

Prepares and commits the transaction. This message first performs validation. If validation is not successful, then the transaction can not be committed and a failure message is returned. If the transaction was successfully validated, then it is melded into the next commit group and a success message is returned. Once the transaction commits, it is no longer active.

POST /bigdata/tx/txId?COMMIT

Status codes

Status Code Meaning
200 The transaction was validated and committed.
404 The transaction was not found on the server.
409 Validation failed for the transaction.

The response entity is an XML document as defined above.

TODO

TODO Define the post-condition of PREPARE when the transaction fails validation (has the transaction write set been discarded? Is the transaction aborted?) Update the client IRemoteTx API and implementation to be in compliance with these post-condition definitions.

TODO Document redo patterns for the client when a transaction fails validation (and write tests for those patterns).

TODO Update the "Java Client API" section per the javadoc at BigdataSailRemoteRepository. Do this when we merge back to the master.

TODO Update the TxGuide. That page is really about internals and concepts. Have it point to this section?

TODO Scale-out does not report the readsOnCommitTime per trac #266. This is making the internal API a bit cumbersome. Consider reporting as -1L for scale-out until #266 is resolved and documenting this in the API.

TODO Introduce a timeout for transactions? Open transactions pin the commit point on which they read. If a client accidentally leaves open a transaction (e.g., by dying or becoming disconnected from the server) then recycling will be disabled. Transactions without ongoing client activity should probably be timed out. Also, it may make sense to impose a timeout on transactions and to allow the client to request a timeout allocation when it creates the transaction.

TODO Add an age attribute to the XML response entity for the tx element. This would make it possible to identify the longest running transactions. The readsOnCommitTime can identify the earliest pinned commit point, but this is not really the same. For example, all transactions against a read-only end point (and why would you bother ...) could have the same readsOnCommitTime if they read on the same commit point while some could have been open for days and others only milliseconds. The Tx class does not currently record when a transaction is created, but this could be changed easily enough.

Java Client API

We have added a Java API for clients to the NanoSparqlServer. The main REST API is contained in the class:

com.bigdata.rdf.sail.webapp.client.RemoteRepository

And the test case "com.bigdata.rdf.sail.webapp.TestNanoSparqlClient" demonstrates how to use the API.

The Multi-Tenancy API is contained in the class:

com.bigdata.rdf.sail.webapp.client.RemoteRepositoryManager

See JettyHttpClient for more details about the jetty client integration.

Query Optimization

There are several ways to get information about running query evaluation plans.

  1. The #STATUS page has a showQueries=(details) option which provides in-depth information about the SPARQL query, Abstract Syntax Tree, bigdata operators (bops), and running statistics on current queries.
  2. The #QUERY ?explain parameter may be used with a query to report essentially the same information as the #STATUS page in an HTML response.

Performance Optimization resources

  1. There is also a good write-up on query performance optimization on the blog [2].
  2. There is a section on performance optimization for bigdata on the wiki PerformanceOptimization.
  3. Bigdata supports a variety of query hints through both the SAIL and the NanoSparqlServer interfaces. See [3] for more details.
  4. Bigdata supports query hints using magic triples (since 1.1.0). See QueryHints.