Quick Start Guide for the IceFireDB Database #

This guide walks you through the quickest way to get started with IceFireDB. For non-production environments, you can deploy your IceFireDB database by the following method:

  1. Deploy a local test cluster: simulate a production deployment on a single machine

Deploy a local test cluster #

Scenario: Quickly deploy a local IceFireDB cluster for testing on a single macOS or Linux server. IceFireDB is a distributed system; within the same availability-zone network, a basic IceFireDB cluster usually consists of three IceFireDB instances.

1. Download and compile the program #

git clone https://github.com/IceFireDB/IceFireDB.git IceFireDB-NoSQL

cd IceFireDB-NoSQL

make && ls ./bin/IceFireDB

If the following message is displayed, you have built IceFireDB-NoSQL successfully:

if [ ! -d "./bin/" ]; then \
mkdir bin; \
fi
go build -ldflags "-s -w -X \"main.BuildVersion=1c102f3\" -X \"main.BuildDate=2022-11-21 06:17:29\"" -o bin/IceFireDB .
./bin/IceFireDB

2. Select the underlying storage engine #

IceFireDB implements several storage engines at the bottom layer, mainly in the following categories. The engine is chosen at startup through a cmd variable:

Engine type    cmd key            cmd value
LevelDB        storage-backend    goleveldb
Badger         storage-backend    badger
IPFS           storage-backend    ipfs
CRDT-KV        storage-backend    crdt
IPFS-LOG       storage-backend    ipfs-log
OrbitDB        storage-backend    orbitdb
OSS            storage-backend    oss
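
For example, to start a single node with the LevelDB engine, pass the cmd key and value at startup (a minimal sketch; the flags mirror the node1 command in the next step):

# start one node backed by goleveldb, listening on 127.0.0.1:6001
./bin/IceFireDB -storage-backend goleveldb -n 1 -a 127.0.0.1:6001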

3. Start the cluster in the current session #

mkdir 6001 && mkdir 6002 && mkdir 6003

cp ./bin/IceFireDB ./6001
cp ./bin/IceFireDB ./6002
cp ./bin/IceFireDB ./6003

# start node1 (run from the IceFireDB-NoSQL directory)
./6001/IceFireDB -storage-backend ipfs-log -n 1 -a 127.0.0.1:6001 --openreads

# start node2
./6002/IceFireDB -storage-backend ipfs-log -n 2 -a 127.0.0.1:6002 -j 127.0.0.1:6001 --openreads

# start node3
./6003/IceFireDB -storage-backend ipfs-log -n 3 -a 127.0.0.1:6003 -j 127.0.0.1:6001 --openreads
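
Each node runs in the foreground, so the three commands above need three separate terminals. Alternatively, a minimal helper script can background all nodes from one session (a sketch; the log paths and the one-second pause are illustrative assumptions):

# start-cluster.sh: launch all three nodes in the background (run from IceFireDB-NoSQL)
./6001/IceFireDB -storage-backend ipfs-log -n 1 -a 127.0.0.1:6001 --openreads > 6001/node.log 2>&1 &
sleep 1  # give node1 a moment to start before the others join it
./6002/IceFireDB -storage-backend ipfs-log -n 2 -a 127.0.0.1:6002 -j 127.0.0.1:6001 --openreads > 6002/node.log 2>&1 &
./6003/IceFireDB -storage-backend ipfs-log -n 3 -a 127.0.0.1:6003 -j 127.0.0.1:6001 --openreads > 6003/node.log 2>&1 &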

Within the same network availability zone, multiple IceFireDB instances can join the same raft network, and that raft network exposes the standard Redis cluster access interface, so standard Redis clients can connect to it.

4. Start a new session to access IceFireDB #

The steps above start three IceFireDB nodes that form a highly available network with each other. We can use redis-cli to observe the cluster status:

sudo apt-get -y install redis-tools

redis-cli -h 127.0.0.1 -p 6002

cluster nodes

Execute the cluster nodes command in the redis-cli terminal to view the cluster status:

127.0.0.1:6002> cluster nodes
356a192b7913b04c54574d18c28d46e6395428ab 127.0.0.1:6001@6001 slave 77de68daecd823babbb58edb1c8e14d7106e83bb 0 0 connected 0-16383
da4b9237bacccdf19c0760cab7aec4a8359010b0 127.0.0.1:6002@6002 slave 77de68daecd823babbb58edb1c8e14d7106e83bb 0 0 connected 0-16383
77de68daecd823babbb58edb1c8e14d7106e83bb 127.0.0.1:6003@6003 master - 0 0 connected 0-16383
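
To pick out the current master non-interactively, the same output can be filtered from the shell (a small convenience one-liner, not part of the original setup):

# print only the master line of the cluster topology
redis-cli -h 127.0.0.1 -p 6002 cluster nodes | grep master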

We use redis-cli for data read and write tests:

redis-cli -c -h 127.0.0.1 -p 6002

127.0.0.1:6002> set foo bar
-> Redirected to slot [0] located at 127.0.0.1:6003
OK

127.0.0.1:6003> get foo
"bar"

We can see that data can be read and written normally; the current master is the 6003 instance. Since we started every node with --openreads, reads are enabled on all nodes, so we can also read data from the slave nodes.


redis-cli -h 127.0.0.1 -p 6001

127.0.0.1:6001> get foo
"bar"

# Although we can read data in the slave node, we cannot write data directly on the slave node.
127.0.0.1:6001> set foo2 bar2
(error) MOVED 0 127.0.0.1:6003
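
If redis-cli is started in cluster mode (-c), it follows the MOVED redirect automatically, so the write succeeds from any node; the expected output below is inferred from the redirect behavior shown earlier:

redis-cli -c -h 127.0.0.1 -p 6001

127.0.0.1:6001> set foo2 bar2
-> Redirected to slot [0] located at 127.0.0.1:6003
OK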

Advanced Ecosystem Tools #

IceFireDB-Proxy: Intelligent network proxy #

The steps above fully demonstrated cluster construction and data read/write access. However, the master-slave relationships between high-availability nodes, client-side fault tolerance for reads and writes, and awareness of each node's status are complex for applications to handle. We therefore built IceFireDB-Proxy, which shields users from the complexity of an IceFireDB high-availability cluster and lets them use the cluster as if it were a single IceFireDB instance.