In-memory property graph server using the shared-nothing design from Seastar.
- Faster than a speeding bullet train
- Connect from anywhere via HTTP
- Use the Lua programming language to query
- Apache License, version 2.0
Check out the main website and documentation at ragedb.com while you spin up an instance on Docker right now:
docker pull dockerhub.ragedb.com/ragedb/ragedb
docker run -u 0 -p 127.0.0.1:7243:7243 --cap-add=sys_nice --name ragedb -t dockerhub.ragedb.com/ragedb/ragedb:latest
If you are running Docker on a Mac or Windows Host, you may see this error message:
WARNING: unable to mbind shard memory; performance may suffer:
Run Docker on a Linux host for the best performance.
- This is a work in progress; if you can make this better, please help!
- Install Terraform.
- Install the AWS CLI.
- Configure the AWS CLI with an access key ID and secret access key.
- Choose an AWS Region. The options are here. E.g. `eu-west-2`
- Generate an SSH key if you don't already have one with `ssh-keygen -t rsa -b 4096`. It will be in `~/.ssh/id_rsa.pub` by default.
- Run `terraform init`.
- Run `terraform apply`.
- Wait a few minutes for the code to compile and the server to spin up.
- Copy the IP output by the previous command into your browser: http://x.x.x.x:7243
- Do Graphy Stuff.
- Irrecoverably shut everything down with `terraform destroy`.
This will bring up an r5.2xlarge with 4 cores set to 1 thread per core and 100 GB of storage.
:POST /db/{graph}/restore
:DELETE /db/{graph}/schema
:GET /db/{graph}/schema/nodes
:GET /db/{graph}/schema/nodes/{type}
:POST /db/{graph}/schema/nodes/{type}
:DELETE /db/{graph}/schema/nodes/{type}
:GET /db/{graph}/schema/relationships
:GET /db/{graph}/schema/relationships/{type}
:POST /db/{graph}/schema/relationships/{type}
:DELETE /db/{graph}/schema/relationships/{type}
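The schema endpoints above can be exercised with curl. A minimal sketch, assuming a RageDB instance on 127.0.0.1:7243 and a graph named "rage"; the node type "User" is made up for illustration:

```shell
# Assumed: RageDB reachable at 127.0.0.1:7243, graph "rage", type "User" is illustrative
BASE="http://127.0.0.1:7243/db/rage"

# List all node types, create a new one, then remove it
curl "$BASE/schema/nodes" || echo "no RageDB instance reachable"
curl -X POST "$BASE/schema/nodes/User" || true
curl -X DELETE "$BASE/schema/nodes/User" || true
```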
RageDB currently supports booleans, 64-bit integers, 64-bit doubles, strings and lists of the preceding data types:
boolean, integer, double, string, boolean_list, integer_list, double_list, string_list
:GET /db/{graph}/schema/nodes/{type}/properties/{property}
:POST /db/{graph}/schema/nodes/{type}/properties/{property}/{data_type}
:DELETE /db/{graph}/schema/nodes/{type}/properties/{property}
:GET /db/{graph}/schema/relationships/{type}/properties/{property}
:POST /db/{graph}/schema/relationships/{type}/properties/{property}/{data_type}
:DELETE /db/{graph}/schema/relationships/{type}/properties/{property}
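Declaring typed properties can be sketched the same way. Assumptions: RageDB at 127.0.0.1:7243, a graph "rage", a node type "User"; the data types come from the list above:

```shell
# Assumed: graph "rage" and node type "User" already exist
BASE="http://127.0.0.1:7243/db/rage"

# Declare a string property and an integer property, then inspect one
curl -X POST "$BASE/schema/nodes/User/properties/name/string" || echo "no RageDB instance reachable"
curl -X POST "$BASE/schema/nodes/User/properties/age/integer" || true
curl "$BASE/schema/nodes/User/properties/age" || true
```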
:GET /db/{graph}/nodes?limit=100&skip=0
:GET /db/{graph}/nodes/{type}?limit=100&skip=0
:GET /db/{graph}/node/{type}/{key}
:GET /db/{graph}/node/{id}
:POST /db/{graph}/node/{type}/{key}
JSON formatted Body: {properties}
:DELETE /db/{graph}/node/{type}/{key}
:DELETE /db/{graph}/node/{id}
:POST /db/{graph}/nodes/{type}/{property}/{operation}?limit=100&skip=0 {json value}
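A sketch of creating, fetching, and finding nodes with the endpoints above. Assumptions: RageDB at 127.0.0.1:7243, graph "rage"; the type "User", key "max", and the "EQ" operation name are illustrative:

```shell
# Assumed: graph "rage" exists; "User"/"max" and the "EQ" operation are illustrative
BASE="http://127.0.0.1:7243/db/rage"

# Create a keyed node with a JSON properties body, then fetch it back
curl -X POST "$BASE/node/User/max" \
     -H "Content-Type: application/json" \
     -d '{"name":"Max","age":40}' || echo "no RageDB instance reachable"
curl "$BASE/node/User/max" || true

# Find nodes whose "age" property matches the JSON value in the body
curl -X POST "$BASE/nodes/User/age/EQ?limit=100&skip=0" \
     -H "Content-Type: application/json" -d '40' || true
```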
:GET /db/{graph}/node/{type}/{key}/properties
:GET /db/{graph}/node/{id}/properties
:POST /db/{graph}/node/{type}/{key}/properties
JSON formatted Body: {properties}
:POST /db/{graph}/node/{id}/properties
JSON formatted Body: {properties}
:PUT /db/{graph}/node/{type}/{key}/properties
JSON formatted Body: {properties}
:PUT /db/{graph}/node/{id}/properties
JSON formatted Body: {properties}
:DELETE /db/{graph}/node/{type}/{key}/properties
:DELETE /db/{graph}/node/{id}/properties
:GET /db/{graph}/node/{type}/{key}/property/{property}
:GET /db/{graph}/node/{id}/property/{property}
:PUT /db/{graph}/node/{type}/{key}/property/{property}
JSON formatted Body: {property}
:PUT /db/{graph}/node/{id}/property/{property}
JSON formatted Body: {property}
:DELETE /db/{graph}/node/{type}/{key}/property/{property}
:DELETE /db/{graph}/node/{id}/property/{property}
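Reading and replacing a single node property might look like this, assuming an existing node User/max with a "name" property:

```shell
# Assumed: node User/max exists with a "name" property
BASE="http://127.0.0.1:7243/db/rage"

curl "$BASE/node/User/max/property/name" || echo "no RageDB instance reachable"
# PUT replaces the single property with the JSON value in the body
curl -X PUT "$BASE/node/User/max/property/name" \
     -H "Content-Type: application/json" -d '"Maxine"' || true
```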
:GET /db/{graph}/relationships?limit=100&skip=0
:GET /db/{graph}/relationships/{type}?limit=100&skip=0
:GET /db/{graph}/relationship/{id}
:POST /db/{graph}/node/{type_1}/{key_1}/relationship/{type_2}/{key_2}/{rel_type}
JSON formatted Body: {properties}
:POST /db/{graph}/node/{id_1}/relationship/{id_2}/{rel_type}
JSON formatted Body: {properties}
:DELETE /db/{graph}/relationship/{id}
:POST /db/{graph}/relationships/{type}/{property}/{operation}?limit=100&skip=0 {json value}
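Connecting two keyed nodes with a typed relationship can be sketched as follows. Assumptions: RageDB at 127.0.0.1:7243, graph "rage"; the nodes User/max and User/helene and the relationship type KNOWS are illustrative:

```shell
# Assumed: nodes User/max and User/helene exist; KNOWS is an illustrative type
BASE="http://127.0.0.1:7243/db/rage"

# Create a KNOWS relationship from User/max to User/helene with properties
curl -X POST "$BASE/node/User/max/relationship/User/helene/KNOWS" \
     -H "Content-Type: application/json" \
     -d '{"since":2011}' || echo "no RageDB instance reachable"
```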
:GET /db/{graph}/node/{type}/{key}/relationships
:GET /db/{graph}/node/{type}/{key}/relationships/{direction [all, in, out]}
:GET /db/{graph}/node/{type}/{key}/relationships/{direction [all, in, out]}/{type TYPE_ONE}
:GET /db/{graph}/node/{type}/{key}/relationships/{direction [all, in, out]}/{type(s) TYPE_ONE&TYPE_TWO}
:GET /db/{graph}/node/{id}/relationships
:GET /db/{graph}/node/{id}/relationships/{direction [all, in, out]}
:GET /db/{graph}/node/{id}/relationships/{direction [all, in, out]}/{type TYPE_ONE}
:GET /db/{graph}/node/{id}/relationships/{direction [all, in, out]}/{type(s) TYPE_ONE&TYPE_TWO}
:GET /db/{graph}/relationship/{id}/properties
:POST /db/{graph}/relationship/{id}/properties
JSON formatted Body: {properties}
:PUT /db/{graph}/relationship/{id}/properties
JSON formatted Body: {properties}
:DELETE /db/{graph}/relationship/{id}/properties
:GET /db/{graph}/relationship/{id}/property/{property}
:PUT /db/{graph}/relationship/{id}/property/{property}
JSON formatted Body: {property}
:DELETE /db/{graph}/relationship/{id}/property/{property}
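Relationship properties work like node properties, keyed by the relationship id. A sketch, assuming relationship id 1 exists:

```shell
# Assumed: a relationship with id 1 exists in graph "rage"
BASE="http://127.0.0.1:7243/db/rage"

curl "$BASE/relationship/1/properties" || echo "no RageDB instance reachable"
# Replace a single property with the JSON value in the body
curl -X PUT "$BASE/relationship/1/property/since" \
     -H "Content-Type: application/json" -d '2012' || true
```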
:GET /db/{graph}/node/{type}/{key}/degree
:GET /db/{graph}/node/{type}/{key}/degree/{direction [all, in, out]}
:GET /db/{graph}/node/{type}/{key}/degree/{direction [all, in, out]}/{type TYPE_ONE}
:GET /db/{graph}/node/{type}/{key}/degree/{direction [all, in, out]}/{type(s) TYPE_ONE&TYPE_TWO}
:GET /db/{graph}/node/{id}/degree
:GET /db/{graph}/node/{id}/degree/{direction [all, in, out]}
:GET /db/{graph}/node/{id}/degree/{direction [all, in, out]}/{type TYPE_ONE}
:GET /db/{graph}/node/{id}/degree/{direction [all, in, out]}/{type(s) TYPE_ONE&TYPE_TWO}
:GET /db/{graph}/node/{type}/{key}/neighbors
:GET /db/{graph}/node/{type}/{key}/neighbors/{direction [all, in, out]}
:GET /db/{graph}/node/{type}/{key}/neighbors/{direction [all, in, out]}/{type TYPE_ONE}
:GET /db/{graph}/node/{type}/{key}/neighbors/{direction [all, in, out]}/{type(s) TYPE_ONE&TYPE_TWO}
:GET /db/{graph}/node/{id}/neighbors
:GET /db/{graph}/node/{id}/neighbors/{direction [all, in, out]}
:GET /db/{graph}/node/{id}/neighbors/{direction [all, in, out]}/{type TYPE_ONE}
:GET /db/{graph}/node/{id}/neighbors/{direction [all, in, out]}/{type(s) TYPE_ONE&TYPE_TWO}
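The traversal endpoints above (relationships, degree, neighbors) share the same direction and type filters. A sketch, assuming a node User/max and the relationship types KNOWS and WORKS_WITH:

```shell
# Assumed: node User/max exists; KNOWS and WORKS_WITH are illustrative types
BASE="http://127.0.0.1:7243/db/rage"

# Outgoing KNOWS relationships of User/max
curl "$BASE/node/User/max/relationships/out/KNOWS" || echo "no RageDB instance reachable"
# Total degree across all directions
curl "$BASE/node/User/max/degree/all" || true
# Neighbors over multiple types, separated by &
curl "$BASE/node/User/max/neighbors/in/KNOWS&WORKS_WITH" || true
```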
:POST /db/{graph}/lua
STRING formatted Body: {script}
The script must end in one or more values, which will be returned in JSON format inside an array. Within the script, the user has access to the graph functions. For example:
-- Get some things about a node
a = NodeGetId("Node","Max")
b = NodeTypesGetCount()
c = NodeTypesGetCount("Node")
d = NodeGetProperty("Node", "Max", "name")
e = NodeGetProperty(a, "name")
a, b, c, d, e
A second example:
-- get the names of nodes I have relationships with
names = {}
ids = NodeGetLinks("Node", "Max")
for k = 1, #ids do
  v = ids[k]
  table.insert(names, NodeGetProperty(v:getNodeId(), "name"))
end
names
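The second example above can be sent over HTTP as the raw request body; the final line's values come back as a JSON array. Assumptions: RageDB at 127.0.0.1:7243, graph "rage":

```shell
# Assumed: graph "rage" exists and contains the Node "Max" from the examples above
BASE="http://127.0.0.1:7243/db/rage"

curl -X POST "$BASE/lua" --data-binary @- <<'EOF' || echo "no RageDB instance reachable"
-- get the names of nodes I have relationships with
names = {}
ids = NodeGetLinks("Node", "Max")
for k = 1, #ids do
  v = ids[k]
  table.insert(names, NodeGetProperty(v:getNodeId(), "name"))
end
names
EOF
```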
RageDB uses Seastar, which only runs on *nix servers (no Windows or Mac), so use your local Linux desktop or EC2.
On EC2 launch an instance:
Step 1: Choose an Amazon Machine Image
Ubuntu Server 20.04 LTS(HVM), SSD Volume Type - ami-09e67e426f25ce0d7
Step 2: Choose Instance Type
r5.2xlarge
Step 3: Configure Instance
Specify CPU options
Threads per core = 1
Step 4: Add Storage
100 GiB
Launch
Once the instance is running, connect to it and start a "screen" session, then follow these steps:
First let's update and upgrade to the latest versions of local software:
sudo apt-get update && sudo apt-get dist-upgrade
Install Seastar (this will take a while, which is why we are using screen):
git clone https://github.com/scylladb/seastar.git
cd seastar
sudo ./install_dependencies.sh
./configure.py --mode=release --prefix=/usr/local
sudo ninja -C build/release install
Install Additional Dependencies
sudo apt-get install -y ccache python3-pip
Install conan
pip install --user conan
sudo ln -s ~/.local/bin/conan /usr/bin/conan
Install LuaJIT
sudo apt-get install -y luajit luajit-5.1-dev
Install RageDB
git clone https://github.com/ragedb/ragedb.git
cd ragedb
mkdir build
cd build
cmake .. -DCMAKE_BUILD_TYPE=Release
cmake --build . --target ragedb
cd bin
sudo ./ragedb
If you get errors regarding conan locks, run:
conan remove --locks
If you get errors like:
Creation of perf_event based stall detector failed, falling back to posix timer: std::system_error
Then this should fix it:
echo 0 | sudo tee /proc/sys/kernel/perf_event_paranoid
To make it permanent edit /etc/sysctl.conf by adding:
kernel.perf_event_paranoid = 0
If you get errors about aio-max-nr you'll want to increase it:
echo 88208 | sudo tee /proc/sys/fs/aio-max-nr
We allocate 11026 aio slots per shard (10000 + 1024 + 2) so 8 shards = 88208
To make it permanent edit /etc/sysctl.conf by adding:
fs.aio-max-nr = 88208
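The figure above can be derived for any shard (core) count, using the 11026 slots-per-shard number (10000 + 1024 + 2):

```shell
# Compute the aio-max-nr value for a given number of shards
SHARDS=8
SLOTS_PER_SHARD=$((10000 + 1024 + 2))   # 11026 aio slots per shard
echo $((SHARDS * SLOTS_PER_SHARD))      # 88208 for 8 shards
```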
- Command Logging (started)
- Recovering (started)
- Snapshots
- Replication
- Metrics
- Everything Else
- Take over the world
- When running KHop queries on combining the RoaringBitmaps using |=
- When importing data on the power of two growth of the tsl sparse_map
- In GetToKeysFromRelationshipsInCSV
- In PartitionRelationshipsInCSV
- Allow Node and Relationship Type handlers to take a json map defining the property keys and data types
- Allow additional data types: 8, 16 and 32 bit integers, 32 bit floats, byte, list of bytes, nested types
- NodeTypes and RelationshipTypes should allow type deletion and type id reuse
- Allow property type conversions (int to double, string to int, int to int array, etc.)
Static analysis can be run with PVS-Studio:
pvs-studio-analyzer trace -- make
pvs-studio-analyzer analyze
plog-converter -a GA:1,2 -t tasklist PVS-Studio.log