Bulk Loading data into JanusGraph — Part 2

In the last post, we configured and started the JanusGraph server with ConfiguredGraphFactory, defined our schema, and created the indexes.

In this post, we will first go over some of the configuration options we chose when creating our custom configuration graph. Then we will look at the Apache Spark code that inserts data into JanusGraph in bulk.

Configuration Options

Let’s look at some of the configuration options we chose for our graph.

There are lots of different config options available. I would highly encourage you to read about them and experiment as per your needs. For the full list of configuration options, refer to the JanusGraph configuration reference.

Connecting to JanusGraph

JanusGraph supports any language for which a TinkerPop driver exists, such as Java, Python, or C#. I tried working with both Python and Java and found Java much easier, because JanusGraph can be embedded as a library inside a Java application. For Python there is no JanusGraph library, so you have to work with the gremlin-python distribution. I will go over connectivity with both Java and Python.

Connecting from Java

  1. Add the following dependencies to pom.xml

Even though the JanusGraph documentation does not mention adding the core, cql, and es dependencies, you will have to add them because we are using Scylla as the storage backend with Elasticsearch indexing on vertices and edges.
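The dependency block was an embedded gist in the original post. A minimal sketch of what it would contain is below; the version is a placeholder, so align it with the JanusGraph release you actually deploy.

```xml
<!-- Version is a placeholder; match it to your JanusGraph release. -->
<dependency>
  <groupId>org.janusgraph</groupId>
  <artifactId>janusgraph-core</artifactId>
  <version>0.6.3</version>
</dependency>
<dependency>
  <groupId>org.janusgraph</groupId>
  <artifactId>janusgraph-cql</artifactId>
  <version>0.6.3</version>
</dependency>
<dependency>
  <groupId>org.janusgraph</groupId>
  <artifactId>janusgraph-es</artifactId>
  <version>0.6.3</version>
</dependency>
```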

2. Create the connection using the JanusGraphFactory builder
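The connection snippet was embedded as a gist in the original post. As a rough sketch of the builder-based configuration, assuming Scylla is reachable over the CQL protocol and Elasticsearch backs the mixed indexes (both hostnames are placeholders):

```java
import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;

public class GraphConnection {
    public static JanusGraph open() {
        // Hostnames are placeholders; point them at your Scylla and ES nodes.
        return JanusGraphFactory.build()
                .set("storage.backend", "cql")
                .set("storage.hostname", "127.0.0.1")
                .set("index.search.backend", "elasticsearch")
                .set("index.search.hostname", "127.0.0.1")
                .open();
    }
}
```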

3. Create Apache Spark-based bulk-load code similar to this
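The Spark job itself was an embedded gist. A condensed sketch of the pattern, assuming a Dataset&lt;Row&gt; input with hypothetical name and friend columns (the connection settings are placeholders, as above):

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;
import org.janusgraph.core.JanusGraphTransaction;
import org.janusgraph.core.JanusGraphVertex;

public class BulkLoader {
    public static void load(Dataset<Row> rows) {
        rows.foreachPartition(partition -> {
            // One graph instance and one transaction per partition.
            JanusGraph graph = JanusGraphFactory.build()
                    .set("storage.backend", "cql")
                    .set("storage.hostname", "127.0.0.1")   // placeholder
                    .set("index.search.backend", "elasticsearch")
                    .set("index.search.hostname", "127.0.0.1") // placeholder
                    .open();
            JanusGraphTransaction tx = graph.newTransaction();
            while (partition.hasNext()) {
                Row row = partition.next();
                // Column names and labels below are assumptions for illustration.
                String name = row.getAs("name");
                String friend = row.getAs("friend");
                JanusGraphVertex src = tx.addVertex("person");
                src.property("name", name);
                JanusGraphVertex dst = tx.addVertex("person");
                dst.property("name", friend);
                src.addEdge("knows", dst);
            }
            tx.commit();   // commit once per partition
            graph.close(); // release resources
        });
    }
}
```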

The sample code above uses Spark's foreachPartition: it creates a graph instance and opens a transaction for each partition, inserts all the vertices and edges for each row within the partition, then commits the transaction and closes the resources.

Connecting from Python

  1. There is no JanusGraph library for Python, so we need to use the gremlin-python package to connect. Install gremlin-python using the command below.
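The install command was embedded as a gist in the original post; the package on PyPI is named gremlinpython:

```shell
pip install gremlinpython
```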

2. Create a sample script using gremlin-python and connect using the remote driver
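The sample script was an embedded gist. A minimal sketch of connecting via the remote driver, assuming a Gremlin Server listening on the default port (the URL and vertex data are placeholders):

```python
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

# The server URL is a placeholder; point it at your JanusGraph/Gremlin Server.
conn = DriverRemoteConnection('ws://localhost:8182/gremlin', 'g')
g = traversal().withRemote(conn)

# Insert one vertex; .next() submits the traversal and commits the transaction.
g.addV('person').property('name', 'alice').next()

print(g.V().count().next())
conn.close()
```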

The gremlin-python library has no support for opening a transaction and executing multiple queries within it. Each record executes in its own transaction, and .next() commits it. This is not going to be very efficient if you have to insert millions of records.
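One common mitigation (not from the original post) is to chain many addV steps into a single traversal and submit the whole batch in one round trip, so each server request commits one batch rather than one record. A minimal, pure-Python batching helper for that pattern, with an assumed batch size and hypothetical column names:

```python
from itertools import islice

def chunked(records, size):
    """Yield successive lists of at most `size` records."""
    it = iter(records)
    while True:
        batch = list(islice(it, size))
        if not batch:
            return
        yield batch

# Hypothetical usage against a gremlin-python traversal source `g`:
# for batch in chunked(rows, 500):
#     t = g
#     for row in batch:
#         t = t.addV('person').property('name', row['name'])
#     t.iterate()  # one round trip, one server-side commit per batch
```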


With the Java-based Apache Spark code, I was able to get a write performance of approximately 30,000 qps and a read performance of approximately 45,000 qps. Even on relatively small infrastructure, we could see the performance meeting our SLAs.


I started working with JanusGraph recently. Prior to this, I had experience working with Neo4j and Amazon Neptune. Since JanusGraph is open-source, there is not much help available, and most of the articles out there target older versions and are obsolete now. Still, some of these articles and links were very helpful.

Happy Learning

PS: This is my very first article on medium.com. I thoroughly enjoyed writing and sharing this knowledge. Hopefully you will find it useful too :-)

Data Engineering at Amazon
