add 9 more databases (#12)

* add clickhouse, cockroach, meilisearch, scylla, tarantool, tidb cluster, typesense, yugabyte
* Update README.md
* add aerospike
* update README.md aerospike
* remove www
* bugfix docs clickhouse link
Kiswono Prayogo, commit 3f0661ad52 (parent c9a2057ac0)

.gitignore

@ -1,2 +1,8 @@
docker-compose.override.yml
.env
tidb/data
tidb/logs
tidb/config/*/
.idea
cockroach/cockroach1
cockroach/data

@ -5,14 +5,21 @@ A collection of Docker Compose files I've used to quickly spin up local database
# Included Databases
Database | Docker Compose Configuration | Website
----------- | ------------------------------- | ----------------------------------
Aerospike | [./aerospike](./aerospike/) | <https://aerospike.com/>
ClickHouse | [./clickhouse](./clickhouse/) | <https://clickhouse.tech/>
CockroachDB | [./cockroach](./cockroach/) | <https://www.cockroachlabs.com/>
DynamoDB | [./dynamo](./dynamo/) | <https://aws.amazon.com/dynamodb/>
Fauna | [./fauna](./fauna/) | <https://fauna.com/>
MariaDB | [./maria](./maria/) | <https://mariadb.org/>
MeiliSearch | [./meilisearch](./meilisearch/) | <https://meilisearch.com/>
MongoDB | [./mongo](./mongo/) | <https://mongodb.com/>
MySQL | [./mysql](./mysql/) | <https://mysql.com/>
PostgreSQL | [./postgres](./postgres/) | <https://postgresql.org/>
Redis | [./redis](./redis/) | <https://redis.io/>
ScyllaDB | [./scylla](./scylla/) | <https://scylladb.com/>
Tarantool | [./tarantool](./tarantool/) | <https://tarantool.io/>
TiDB | [./tidb](./tidb/) | <https://pingcap.com/>
TypeSense | [./typesense](./typesense/) | <https://typesense.org/>
YugaByteDB | [./yugabyte](./yugabyte/) | <https://yugabyte.com/>
## Usage
@ -31,3 +38,9 @@ docker-compose down -v
## Contributions
If you have a Docker Compose configuration for a database not seen here, please consider making a pull request to add it!
## TODO
- add data volume binding for each database
- add all possible environment variables
- add examples of how to connect with a client, with or without Docker (client program installed locally), and with Go

@ -0,0 +1,14 @@
# Aerospike w/ Docker Compose
with default empty password
## Connecting with client
```
node=`docker ps | grep aerospike | cut -f 1 -d ' '`
docker exec -it $node aql
```
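For a quick smoke test without an interactive session, `aql -c` runs a single command; the `demo` set name here is just an example (the `test` namespace is defined in aerospike.conf):
```
node=$(docker ps | grep aerospike | cut -f 1 -d ' ')
docker exec -it $node aql -c "INSERT INTO test.demo (PK, name) VALUES ('k1', 'alice')"
docker exec -it $node aql -c "SELECT * FROM test.demo"
```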
## Other docs
[aerospike docker](https://docs.aerospike.com/docs/deploy_guides/docker/orchestrate/)

@ -0,0 +1,66 @@
# Aerospike database configuration file.

# This stanza must come first.
service {
  user root
  group root
  paxos-single-replica-limit 1 # Number of nodes where the replica count is automatically reduced to 1.
  pidfile /var/run/aerospike/asd.pid
  service-threads 20 # Should be 5 times the number of vCPUs if there is at least
                     # 1 SSD namespace, otherwise simply the number of vCPUs.
  proto-fd-max 15000
}

logging {
  # Log file must be an absolute path.
  file /var/log/aerospike/aerospike.log {
    context any info
  }
  # Send log messages to stdout
  console {
    context any info
  }
}

network {
  service {
    address any
    port 3000
  }
  heartbeat {
    # mesh is used for environments that do not support multicast
    mode mesh
    port 3002
    # use asinfo -v 'tip:host=<ADDR>;port=3002' to inform cluster of
    # other mesh nodes
    #mesh-port 3002
    interval 150
    timeout 10
  }
  fabric {
    port 3001
  }
}
namespace test {
  replication-factor 2
  memory-size 1G
  # To use in-memory storage instead, replace the storage-engine device
  # block below with the following line:
  # storage-engine memory
  storage-engine device {
    file /opt/aerospike/data/test.dat
    filesize 4G
    data-in-memory true # Store data in memory in addition to file.
  }
}

@ -0,0 +1,21 @@
version: "3.2"
services:
aerospikedb:
image: aerospike/aerospike-server:latest
deploy:
replicas: 1
endpoint_mode: dnsrr
labels:
com.aerospike.cluster: "myproject"
command: [ "--config-file","/run/secrets/aerospike.conf"]
secrets:
- source: conffile
target: aerospike.conf
mode: 0440
ports:
- "3000:3000"
secrets:
conffile:
file: ./aerospike.conf

@ -0,0 +1,20 @@
# ClickHouse w/ Docker Compose
with default empty password
## Change password and other config
Mount a custom config file into the container by adding a volume in docker-compose.yml:
```
volumes:
- ./local.xml:/etc/clickhouse-server/config.d/local.xml
```
## Connecting with client
```
clickhouse-client
```
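If clickhouse-client isn't installed on the host, it also ships inside the server image; a minimal smoke test from the host (the query is just an example):
```
node=$(docker ps | grep clickhouse | cut -f 1 -d ' ')
docker exec -it $node clickhouse-client --query "SELECT version()"
```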
## Other docs
[clickhouse server docker](https://hub.docker.com/r/yandex/clickhouse-server/)

@ -0,0 +1,12 @@
version: "3.2"
services:
clickhouse:
image: yandex/clickhouse-server
ports:
- 8123:8123
- 9000:9000
ulimits:
nofile:
soft: 262144
hard: 262144

@ -0,0 +1,14 @@
# CockroachDB w/ Docker Compose
with default empty password
## Connecting with client
```
node=`docker ps | grep cockroach | cut -f 1 -d ' '`
docker exec -it $node cockroach sql --insecure
```
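`cockroach sql` also accepts one-off statements via `-e`; the `demo` database name here is just an example:
```
docker exec -it $node cockroach sql --insecure -e "CREATE DATABASE IF NOT EXISTS demo; SHOW DATABASES;"
```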
## Other docs
[cockroach docker](https://kb.objectrocket.com/cockroachdb/docker-compose-and-cockroachdb-1151)

@ -0,0 +1,12 @@
version: "3.2"
services:
cockroach:
image: cockroachdb/cockroach:latest
volumes:
- ./cockroach1:/cockroach/cockroach-data
command: start-single-node --insecure --accept-sql-without-tls
ports:
- "26257:26257"
- "8080:8080"

@ -0,0 +1,14 @@
# MeiliSearch w/ Docker Compose
with default empty key
## Change master key
Set the key via the container command in docker-compose.yml:
```
command: ./meilisearch --master-key=masterKey
```
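A minimal smoke test from the host with no master key set, as in the bundled compose file (the `movies` index is just an example; MeiliSearch creates it on the first document insert):
```
curl http://localhost:7700/health
curl -X POST 'http://localhost:7700/indexes/movies/documents' -H 'Content-Type: application/json' --data '[{"id": 1, "title": "Carol"}]'
curl 'http://localhost:7700/indexes/movies/search?q=carol'
```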
## Other docs
[installation with docker](https://docs.meilisearch.com/learn/getting_started/installation.html#download-and-launch)

@ -0,0 +1,7 @@
version: '3.2'
services:
  meilisearch:
    image: getmeili/meilisearch
    ports:
      - "7700:7700"

@ -0,0 +1,22 @@
# ScyllaDB w/ Docker Compose
with default empty username and password
## Username and Password
Authentication is disabled by default. To enable it, set `authenticator: PasswordAuthenticator` in scylla.yaml; the default superuser is `cassandra`/`cassandra`.
## Connecting with client
```
node=`docker ps | grep /scylla: | cut -f 1 -d ' '`
docker exec -it $node cqlsh
```
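`cqlsh -e` runs one-off CQL as well; a quick sanity check (the keyspace name is just an example):
```
docker exec -it $node cqlsh -e "CREATE KEYSPACE IF NOT EXISTS demo WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}; DESCRIBE KEYSPACES"
```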
## Other docs
[scylla docker](https://docs.scylladb.com/operating-scylla/manager/1.3/run-in-docker/)

@ -0,0 +1,20 @@
version: "3.2"
services:
scylla-node1:
image: scylladb/scylla:4.4.0
command: --smp 1 --memory 750M --overprovisioned 1 --api-address 0.0.0.0
ports:
- 9042:9042
- 9142:9142
- 7000:7000
- 7001:7001
- 7199:7199
- 10000:10000
scylla-manager:
image: scylladb/scylla-manager
container_name: scylla-manager
depends_on:
- scylla-node1
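Once the node is up, cluster membership can be checked with nodetool, which ships in the Scylla image:
```
node=$(docker ps | grep /scylla: | cut -f 1 -d ' ')
docker exec -it $node nodetool status
```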

@ -0,0 +1,21 @@
# Tarantool w/ Docker Compose
with default empty username and password
## Username and Password
Set these environment variables (under `environment:` in docker-compose.yml):
```
TARANTOOL_USER_NAME
TARANTOOL_USER_PASSWORD
```
## Connecting with client
```
tarantoolctl connect 3301
```
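If tarantoolctl isn't installed on the host, the official image also provides a `console` helper (per the tarantool/docker README):
```
node=$(docker ps | grep tarantool | cut -f 1 -d ' ')
docker exec -it $node console
```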
## Other docs
[tarantool/docker](https://github.com/tarantool/docker)

@ -0,0 +1,10 @@
version: '3.2'
services:
  tarantool:
    image: tarantool/tarantool:2.7.2
    # x.x.0 = alpha, x.x.1 = beta, x.x.2+ = stable; latest is not always stable
    volumes:
      - ./tarantool-data:/usr/local/share/tarantool
    ports:
      - "3301:3301"

@ -0,0 +1,21 @@
# TiDB Cluster w/ Docker Compose
with default empty username and password
## Setup
```
git clone --depth 1 https://github.com/pingcap/tidb-docker-compose.git
cd tidb-docker-compose
docker-compose up
```
## Connecting with client
```
mysql -u root -h 127.0.0.1 -P 4000
```
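To confirm you're talking to TiDB rather than stock MySQL, query the TiDB-specific version function:
```
mysql -u root -h 127.0.0.1 -P 4000 -e "SELECT tidb_version();"
```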
## Other docs
[tidb docker compose](https://github.com/pingcap/tidb-docker-compose)

@ -0,0 +1,86 @@
# PD Configuration.
name = "pd"
data-dir = "default.pd"
client-urls = "http://127.0.0.1:2379"
# if not set, use ${client-urls}
advertise-client-urls = ""
peer-urls = "http://127.0.0.1:2380"
# if not set, use ${peer-urls}
advertise-peer-urls = ""
initial-cluster = "pd=http://127.0.0.1:2380"
initial-cluster-state = "new"
lease = 3
tso-save-interval = "3s"
[security]
# Path of the file that contains the list of trusted SSL CAs. If set, the following settings must not be empty.
cacert-path = ""
# Path of file that contains X509 certificate in PEM format.
cert-path = ""
# Path of file that contains X509 key in PEM format.
key-path = ""
[log]
level = "error"
# log format, one of json, text, console
#format = "text"
# disable automatic timestamps in output
#disable-timestamp = false
# file logging
[log.file]
#filename = ""
# max log file size in MB
#max-size = 300
# max log file keep days
#max-days = 28
# maximum number of old log files to retain
#max-backups = 7
# rotate log by day
#log-rotate = true
[metric]
# Prometheus client push interval; set "0s" to disable Prometheus push.
interval = "0s"
# Prometheus pushgateway address; leaving it empty disables Prometheus push.
address = ""
[schedule]
max-merge-region-size = 0
split-merge-interval = "1h"
max-snapshot-count = 3
max-pending-peer-count = 16
max-store-down-time = "30m"
leader-schedule-limit = 4
region-schedule-limit = 4
replica-schedule-limit = 8
merge-schedule-limit = 8
tolerant-size-ratio = 5.0
# customized schedulers, the format is as below
# if empty, it will use balance-leader, balance-region, hot-region as default
# [[schedule.schedulers]]
# type = "evict-leader"
# args = ["1"]
[replication]
# The number of replicas for each region.
max-replicas = 3
# The label keys specified the location of a store.
# The placement priorities is implied by the order of label keys.
# For example, ["zone", "rack"] means that we should place replicas to
# different zones first, then to different racks if we don't have enough zones.
location-labels = []
[label-property]
# Do not assign region leaders to stores that have these tags.
# [[label-property.reject-leader]]
# key = "zone"
# value = "cn1

@ -0,0 +1,239 @@
# TiDB Configuration.
# TiDB server host.
host = "0.0.0.0"
# TiDB server port.
port = 4000
# Registered store name, [tikv, mocktikv]
store = "mocktikv"
# TiDB storage path.
path = "/tmp/tidb"
# The socket file to use for connection.
socket = ""
# Run ddl worker on this tidb-server.
run-ddl = true
# Schema lease duration. It is very dangerous to change; do so only if you know what you are doing.
lease = "0"
# When creating a table, split a separate region for it. It is recommended to
# turn off this option if a large number of tables will be created.
split-table = true
# The limit of concurrent executed sessions.
token-limit = 1000
# Only print a log when out of memory quota.
# Valid options: ["log", "cancel"]
oom-action = "log"
# Set the memory quota for a query in bytes. Default: 32GB
mem-quota-query = 34359738368
# Enable coprocessor streaming.
enable-streaming = false
# Set system variable 'lower_case_table_names'
lower-case-table-names = 2
[log]
# Log level: debug, info, warn, error, fatal.
level = "error"
# Log format, one of json, text, console.
format = "text"
# Disable automatic timestamp in output
disable-timestamp = false
# Stores slow query log into separated files.
slow-query-file = ""
# Queries with execution time greater than this value will be logged. (Milliseconds)
slow-threshold = 300
# Queries with internal result greater than this value will be logged.
expensive-threshold = 10000
# Maximum query length recorded in log.
query-log-max-len = 2048
# File logging.
[log.file]
# Log file name.
filename = ""
# Max log file size in MB (upper limit to 4096MB).
max-size = 300
# Max log file keep days. No clean up by default.
max-days = 0
# Maximum number of old log files to retain. No clean up by default.
max-backups = 0
# Rotate log by day
log-rotate = true
[security]
# Path of file that contains list of trusted SSL CAs for connection with mysql client.
ssl-ca = ""
# Path of file that contains X509 certificate in PEM format for connection with mysql client.
ssl-cert = ""
# Path of file that contains X509 key in PEM format for connection with mysql client.
ssl-key = ""
# Path of file that contains list of trusted SSL CAs for connection with cluster components.
cluster-ssl-ca = ""
# Path of file that contains X509 certificate in PEM format for connection with cluster components.
cluster-ssl-cert = ""
# Path of file that contains X509 key in PEM format for connection with cluster components.
cluster-ssl-key = ""
[status]
# If enable status report HTTP service.
report-status = true
# TiDB status port.
status-port = 10080
# Prometheus pushgateway address; leaving it empty disables Prometheus push.
metrics-addr = ""
# Prometheus client push interval in seconds; set "0" to disable Prometheus push.
metrics-interval = 0
[performance]
# Max CPUs to use, 0 use number of CPUs in the machine.
max-procs = 0
# StmtCountLimit limits the max count of statement inside a transaction.
stmt-count-limit = 5000
# Set keep alive option for tcp connection.
tcp-keep-alive = true
# The maximum number of retries when committing a transaction.
retry-limit = 10
# Whether to support cartesian product.
cross-join = true
# Stats lease duration, which influences the time of analyze and stats load.
stats-lease = "3s"
# Run auto analyze worker on this tidb-server.
run-auto-analyze = true
# Probability to use the query feedback to update stats, 0 or 1 for always false/true.
feedback-probability = 0.0
# The max number of query feedback that cache in memory.
query-feedback-limit = 1024
# Pseudo stats will be used if the ratio between the modify count and
# row count in statistics of a table is greater than it.
pseudo-estimate-ratio = 0.7
[proxy-protocol]
# PROXY protocol acceptable client networks.
# Empty string means disable PROXY protocol, * means all networks.
networks = ""
# PROXY protocol header read timeout, unit is second
header-timeout = 5
[plan-cache]
enabled = false
capacity = 2560
shards = 256
[prepared-plan-cache]
enabled = false
capacity = 100
[opentracing]
# Enable opentracing.
enable = false
# Whether to enable the rpc metrics.
rpc-metrics = false
[opentracing.sampler]
# Type specifies the type of the sampler: const, probabilistic, rateLimiting, or remote
type = "const"
# Param is a value passed to the sampler.
# Valid values for Param field are:
# - for "const" sampler, 0 or 1 for always false/true respectively
# - for "probabilistic" sampler, a probability between 0 and 1
# - for "rateLimiting" sampler, the number of spans per second
# - for "remote" sampler, param is the same as for "probabilistic"
# and indicates the initial sampling rate before the actual one
# is received from the mothership
param = 1.0
# SamplingServerURL is the address of jaeger-agent's HTTP sampling server
sampling-server-url = ""
# MaxOperations is the maximum number of operations that the sampler
# will keep track of. If an operation is not tracked, a default probabilistic
# sampler will be used rather than the per operation specific sampler.
max-operations = 0
# SamplingRefreshInterval controls how often the remotely controlled sampler will poll
# jaeger-agent for the appropriate sampling strategy.
sampling-refresh-interval = 0
[opentracing.reporter]
# QueueSize controls how many spans the reporter can keep in memory before it starts dropping
# new spans. The queue is continuously drained by a background go-routine, as fast as spans
# can be sent out of process.
queue-size = 0
# BufferFlushInterval controls how often the buffer is force-flushed, even if it's not full.
# It is generally not useful, as it only matters for very low traffic services.
buffer-flush-interval = 0
# LogSpans, when true, enables LoggingReporter that runs in parallel with the main reporter
# and logs all submitted spans. Main Configuration.Logger must be initialized in the code
# for this option to have any effect.
log-spans = false
# LocalAgentHostPort instructs reporter to send spans to jaeger-agent at this address
local-agent-host-port = ""
[tikv-client]
# Max gRPC connections that will be established with each tikv-server.
grpc-connection-count = 16
# After a duration of this time in seconds if the client doesn't see any activity it pings
# the server to see if the transport is still alive.
grpc-keepalive-time = 10
# After having pinged for keepalive check, the client waits for a duration of Timeout in seconds
# and if no activity is seen even after that the connection is closed.
grpc-keepalive-timeout = 3
# Max time for the commit command; must be at least twice the raft election timeout.
commit-timeout = "41s"
[binlog]
# Socket file to write binlog.
binlog-socket = ""
# WriteTimeout specifies how long it will wait for writing binlog to pump.
write-timeout = "15s"
# If IgnoreError is true, when writing binlog meets an error, TiDB stops writing binlog
# but continues to provide service.
ignore-error = false

@ -0,0 +1,498 @@
# TiKV config template
# Human-readable big numbers:
# File size(based on byte): KB, MB, GB, TB, PB
# e.g.: 1_048_576 = "1MB"
# Time(based on ms): ms, s, m, h
# e.g.: 78_000 = "1.3m"
# log level: trace, debug, info, warn, error, off.
log-level = "error"
# file to store log, write to stderr if it's empty.
# log-file = ""
[readpool.storage]
# size of thread pool for high-priority operations
# high-concurrency = 4
# size of thread pool for normal-priority operations
# normal-concurrency = 4
# size of thread pool for low-priority operations
# low-concurrency = 4
# max running high-priority operations, reject if exceed
# max-tasks-high = 8000
# max running normal-priority operations, reject if exceed
# max-tasks-normal = 8000
# max running low-priority operations, reject if exceed
# max-tasks-low = 8000
# size of stack size for each thread pool
# stack-size = "10MB"
[readpool.coprocessor]
# Notice: if CPU_NUM > 8, default thread pool size for coprocessors
# will be set to CPU_NUM * 0.8.
# high-concurrency = 8
# normal-concurrency = 8
# low-concurrency = 8
# max-tasks-high = 16000
# max-tasks-normal = 16000
# max-tasks-low = 16000
# stack-size = "10MB"
[server]
# set listening address.
# addr = "127.0.0.1:20160"
# set advertise listening address for client communication, if not set, use addr instead.
# advertise-addr = ""
# notify capacity, 40960 is suitable for about 7000 regions.
# notify-capacity = 40960
# maximum number of messages can be processed in one tick.
# messages-per-tick = 4096
# compression type for grpc channel, available values are no, deflate and gzip.
# grpc-compression-type = "no"
# size of thread pool for grpc server.
# grpc-concurrency = 4
# The number of max concurrent streams/requests on a client connection.
# grpc-concurrent-stream = 1024
# The number of connections with each tikv server to send raft messages.
# grpc-raft-conn-num = 10
# Amount to read ahead on individual grpc streams.
# grpc-stream-initial-window-size = "2MB"
# How many snapshots can be sent concurrently.
# concurrent-send-snap-limit = 32
# How many snapshots can be received concurrently.
# concurrent-recv-snap-limit = 32
# max count of tasks being handled, new tasks will be rejected.
# end-point-max-tasks = 2000
# max recursion level allowed when decoding dag expression
# end-point-recursion-limit = 1000
# max time to handle coprocessor request before timeout
# end-point-request-max-handle-duration = "60s"
# the max bytes that snapshot can be written to disk in one second,
# should be set based on your disk performance
# snap-max-write-bytes-per-sec = "100MB"
# set attributes about this server, e.g. { zone = "us-west-1", disk = "ssd" }.
# labels = {}
[storage]
# set the path to rocksdb directory.
# data-dir = "/tmp/tikv/store"
# notify capacity of scheduler's channel
# scheduler-notify-capacity = 10240
# maximum number of messages can be processed in one tick
# scheduler-messages-per-tick = 1024
# the number of slots in scheduler latches, concurrency control for write.
# scheduler-concurrency = 2048000
# scheduler's worker pool size; increase it for heavy-write cases,
# but keep it below the total number of CPU cores.
# scheduler-worker-pool-size = 4
# When the pending write bytes exceeds this threshold,
# the "scheduler too busy" error is displayed.
# scheduler-pending-write-threshold = "100MB"
[pd]
# pd endpoints
# endpoints = []
[metric]
# the Prometheus client push interval. Setting the value to 0s stops Prometheus client from pushing.
# interval = "15s"
interval = "0s"
# the Prometheus pushgateway address. Leaving it empty stops Prometheus client from pushing.
address = ""
# the Prometheus client push job name. Note: A node id will automatically append, e.g., "tikv_1".
# job = "tikv"
[raftstore]
# true (default value) for high reliability; this can prevent data loss on power failure.
# sync-log = true
# set the path to raftdb directory, default value is data-dir/raft
# raftdb-path = ""
# set store capacity, if no set, use disk capacity.
# capacity = 0
# notify capacity, 40960 is suitable for about 7000 regions.
# notify-capacity = 40960
# maximum number of messages can be processed in one tick.
# messages-per-tick = 4096
# Region heartbeat tick interval for reporting to pd.
# pd-heartbeat-tick-interval = "60s"
# Store heartbeat tick interval for reporting to pd.
# pd-store-heartbeat-tick-interval = "10s"
# When a region's size change exceeds region-split-check-diff, we should check
# whether the region should be split or not.
# region-split-check-diff = "6MB"
# Interval to check region whether need to be split or not.
# split-region-check-tick-interval = "10s"
# When a raft entry exceeds the max size, reject proposing the entry.
# raft-entry-max-size = "8MB"
# Interval to gc unnecessary raft log.
# raft-log-gc-tick-interval = "10s"
# A threshold to gc stale raft log; must be >= 1.
# raft-log-gc-threshold = 50
# When the entry count exceeds this value, gc is forcibly triggered.
# raft-log-gc-count-limit = 72000
# When the approximate size of raft log entries exceeds this value, gc is forcibly triggered.
# It's recommended to set it to 3/4 of region-split-size.
# raft-log-gc-size-limit = "72MB"
# When a peer hasn't been active for max-peer-down-duration,
# we will consider this peer to be down and report it to pd.
# max-peer-down-duration = "5m"
# Interval to check whether start manual compaction for a region,
# region-compact-check-interval = "5m"
# Number of regions for each time to check.
# region-compact-check-step = 100
# The minimum number of delete tombstones to trigger manual compaction.
# region-compact-min-tombstones = 10000
# Interval to check whether should start a manual compaction for lock column family,
# if written bytes reach lock-cf-compact-threshold for lock column family, will fire
# a manual compaction for lock column family.
# lock-cf-compact-interval = "10m"
# lock-cf-compact-bytes-threshold = "256MB"
# Interval (s) to check region whether the data are consistent.
# consistency-check-interval = 0
# Use delete range to drop a large number of continuous keys.
# use-delete-range = false
# delay time before deleting a stale peer
# clean-stale-peer-delay = "10m"
# Interval to cleanup import sst files.
# cleanup-import-sst-interval = "10m"
[coprocessor]
# When it is true, it will try to split a region with table prefix if
# that region crosses tables. It is recommended to turn off this option
# if there will be a large number of tables created.
# split-region-on-table = true
# When the region's size exceeds region-max-size, we will split the region
# into two which the left region's size will be region-split-size or a little
# bit smaller.
# region-max-size = "144MB"
# region-split-size = "96MB"
[rocksdb]
# Maximum number of concurrent background jobs (compactions and flushes)
# max-background-jobs = 8
# This value represents the maximum number of threads that will concurrently perform a
# compaction job by breaking it into multiple, smaller ones that are run simultaneously.
# Default: 1 (i.e. no subcompactions)
# max-sub-compactions = 1
# Number of open files that can be used by the DB. You may need to
# increase this if your database has a large working set. Value -1 means
# files opened are always kept open. You can estimate number of files based
# on target_file_size_base and target_file_size_multiplier for level-based
# compaction.
# If max-open-files = -1, RocksDB will prefetch index and filter blocks into
# block cache at startup, so if your database has a large working set, it will
# take several minutes to open the db.
max-open-files = 1024
# Max size of rocksdb's MANIFEST file.
# For detailed explanation please refer to https://github.com/facebook/rocksdb/wiki/MANIFEST
# max-manifest-file-size = "20MB"
# If true, the database will be created if it is missing.
# create-if-missing = true
# rocksdb wal recovery mode
# 0 : TolerateCorruptedTailRecords, tolerate incomplete record in trailing data on all logs;
# 1 : AbsoluteConsistency, We don't expect to find any corruption in the WAL;
# 2 : PointInTimeRecovery, Recover to point-in-time consistency;
# 3 : SkipAnyCorruptedRecords, Recovery after a disaster;
# wal-recovery-mode = 2
# rocksdb write-ahead logs dir path
# This specifies the absolute dir path for write-ahead logs (WAL).
# If it is empty, the log files will be in the same dir as data.
# When you set the path to rocksdb directory in memory like in /dev/shm, you may want to set
# wal-dir to a directory on a persistent storage.
# See https://github.com/facebook/rocksdb/wiki/How-to-persist-in-memory-RocksDB-database
# wal-dir = "/tmp/tikv/store"
# The following two fields affect how archived write-ahead logs will be deleted.
# 1. If both set to 0, logs will be deleted asap and will not get into the archive.
# 2. If wal-ttl-seconds is 0 and wal-size-limit is not 0,
# WAL files will be checked every 10 min and if total size is greater
# then wal-size-limit, they will be deleted starting with the
# earliest until size_limit is met. All empty files will be deleted.
# 3. If wal-ttl-seconds is not 0 and wal-size-limit is 0, then
# WAL files will be checked every wal-ttl-seconds / 2 and those that
# are older than wal-ttl-seconds will be deleted.
# 4. If both are not 0, WAL files will be checked every 10 min and both
# checks will be performed with ttl being first.
# When you set the path to rocksdb directory in memory like in /dev/shm, you may want to set
# wal-ttl-seconds to a value greater than 0 (like 86400) and backup your db on a regular basis.
# See https://github.com/facebook/rocksdb/wiki/How-to-persist-in-memory-RocksDB-database
# wal-ttl-seconds = 0
# wal-size-limit = 0
# rocksdb max total wal size
# max-total-wal-size = "4GB"
# Rocksdb Statistics provides cumulative stats over time.
# Turning statistics on introduces about 5%-10% overhead for RocksDB,
# but it is worth it to know the internal status of RocksDB.
# enable-statistics = true
# Dump statistics periodically in information logs.
# Same as rocksdb's default value (10 min).
# stats-dump-period = "10m"
# Due to Rocksdb FAQ: https://github.com/facebook/rocksdb/wiki/RocksDB-FAQ,
# If you want to use rocksdb on multi disks or spinning disks, you should set value at
# least 2MB;
# compaction-readahead-size = 0
# This is the maximum buffer size that is used by WritableFileWrite
# writable-file-max-buffer-size = "1MB"
# Use O_DIRECT for both reads and writes in background flush and compactions
# use-direct-io-for-flush-and-compaction = false
# Limit the disk IO of compaction and flush. Compaction and flush can cause
# terrible spikes if they exceed a certain threshold. Consider setting this to
# 50% ~ 80% of the disk throughput for a more stable result. However, in heavy
# write workload, limiting compaction and flush speed can cause write stalls too.
# rate-bytes-per-sec = 0
# Enable or disable the pipelined write
# enable-pipelined-write = true
# Allows OS to incrementally sync files to disk while they are being
# written, asynchronously, in the background.
# bytes-per-sync = "0MB"
# Allows OS to incrementally sync WAL to disk while it is being written.
# wal-bytes-per-sync = "0KB"
# Specify the maximal size of the Rocksdb info log file. If the log file
# is larger than `max_log_file_size`, a new info log file will be created.
# If max_log_file_size == 0, all logs will be written to one log file.
# Default: 1GB
# info-log-max-size = "1GB"
# Time for the Rocksdb info log file to roll (in seconds).
# If specified with non-zero value, log file will be rolled
# if it has been active longer than `log_file_time_to_roll`.
# Default: 0 (disabled)
# info-log-roll-time = "0"
# Maximal Rocksdb info log files to be kept.
# Default: 10
# info-log-keep-log-file-num = 10
# This specifies the Rocksdb info LOG dir.
# If it is empty, the log files will be in the same dir as data.
# If it is non empty, the log files will be in the specified dir,
# and the db data dir's absolute path will be used as the log file
# name's prefix.
# Default: empty
# info-log-dir = ""
# The default Column Family is used to store the actual data of the database.
[rocksdb.defaultcf]
# compression method (if any) is used to compress a block.
# no: kNoCompression
# snappy: kSnappyCompression
# zlib: kZlibCompression
# bzip2: kBZip2Compression
# lz4: kLZ4Compression
# lz4hc: kLZ4HCCompression
# zstd: kZSTD
# per level compression
# compression-per-level = ["no", "no", "lz4", "lz4", "lz4", "zstd", "zstd"]
# Approximate size of user data packed per block. Note that the
# block size specified here corresponds to uncompressed data.
# block-size = "64KB"
# If you're doing point lookups you definitely want to turn bloom filters on. We use
# bloom filters to avoid unnecessary disk reads. The default bits_per_key is 10, which
# yields a ~1% false positive rate. Larger bits_per_key values reduce the false positive
# rate, but increase memory usage and space amplification.
# bloom-filter-bits-per-key = 10
# false means one bloom filter per SST file; true means every block has a corresponding bloom filter
# block-based-bloom-filter = false
# level0-file-num-compaction-trigger = 4
# Soft limit on number of level-0 files. We start slowing down writes at this point.
# level0-slowdown-writes-trigger = 20
# Maximum number of level-0 files. We stop writes at this point.
# level0-stop-writes-trigger = 36
# Amount of data to build up in memory (backed by an unsorted log
# on disk) before converting to a sorted on-disk file.
# write-buffer-size = "128MB"
# The maximum number of write buffers that are built up in memory.
# max-write-buffer-number = 5
# The minimum number of write buffers that will be merged together
# before writing to storage.
# min-write-buffer-number-to-merge = 1
# Control maximum total data size for base level (level 1).
# max-bytes-for-level-base = "512MB"
# Target file size for compaction.
# target-file-size-base = "8MB"
# Max bytes for compaction.max_compaction_bytes
# max-compaction-bytes = "2GB"
# There are four different algorithms to pick files to compact.
# 0 : ByCompensatedSize
# 1 : OldestLargestSeqFirst
# 2 : OldestSmallestSeqFirst
# 3 : MinOverlappingRatio
# compaction-pri = 3
# block-cache is used to cache uncompressed blocks; a big block-cache can speed up reads.
# In normal cases it should be tuned to 30%-50% of the system's total memory.
# block-cache-size = "1GB"
# Indicating if we'd put index/filter blocks to the block cache.
# If not specified, each "table reader" object will pre-load index/filter block
# during table initialization.
# cache-index-and-filter-blocks = true
# Pin level0 filter and index blocks in cache.
# pin-l0-filter-and-index-blocks = true
# Enable read amplification statistics.
# value => memory usage (percentage of loaded blocks memory)
# 1 => 12.50 %
# 2 => 06.25 %
# 4 => 03.12 %
# 8 => 01.56 %
# 16 => 00.78 %
# read-amp-bytes-per-bit = 0
# Pick target size of each level dynamically.
# dynamic-level-bytes = true
# Options for the write Column Family
# The write Column Family is used to store commit information in the MVCC model.
[rocksdb.writecf]
# compression-per-level = ["no", "no", "lz4", "lz4", "lz4", "zstd", "zstd"]
# block-size = "64KB"
# write-buffer-size = "128MB"
# max-write-buffer-number = 5
# min-write-buffer-number-to-merge = 1
# max-bytes-for-level-base = "512MB"
# target-file-size-base = "8MB"
# In normal cases it should be tuned to 10%-30% of the system's total memory.
# block-cache-size = "256MB"
# level0-file-num-compaction-trigger = 4
# level0-slowdown-writes-trigger = 20
# level0-stop-writes-trigger = 36
# cache-index-and-filter-blocks = true
# pin-l0-filter-and-index-blocks = true
# compaction-pri = 3
# read-amp-bytes-per-bit = 0
# dynamic-level-bytes = true
[rocksdb.lockcf]
# compression-per-level = ["no", "no", "no", "no", "no", "no", "no"]
# block-size = "16KB"
# write-buffer-size = "128MB"
# max-write-buffer-number = 5
# min-write-buffer-number-to-merge = 1
# max-bytes-for-level-base = "128MB"
# target-file-size-base = "8MB"
# block-cache-size = "256MB"
# level0-file-num-compaction-trigger = 1
# level0-slowdown-writes-trigger = 20
# level0-stop-writes-trigger = 36
# cache-index-and-filter-blocks = true
# pin-l0-filter-and-index-blocks = true
# compaction-pri = 0
# read-amp-bytes-per-bit = 0
# dynamic-level-bytes = true
[raftdb]
# max-sub-compactions = 1
max-open-files = 1024
# max-manifest-file-size = "20MB"
# create-if-missing = true
# enable-statistics = true
# stats-dump-period = "10m"
# compaction-readahead-size = 0
# writable-file-max-buffer-size = "1MB"
# use-direct-io-for-flush-and-compaction = false
# enable-pipelined-write = true
# allow-concurrent-memtable-write = false
# bytes-per-sync = "0MB"
# wal-bytes-per-sync = "0KB"
# info-log-max-size = "1GB"
# info-log-roll-time = "0"
# info-log-keep-log-file-num = 10
# info-log-dir = ""
[raftdb.defaultcf]
# compression-per-level = ["no", "no", "lz4", "lz4", "lz4", "zstd", "zstd"]
# block-size = "64KB"
# write-buffer-size = "128MB"
# max-write-buffer-number = 5
# min-write-buffer-number-to-merge = 1
# max-bytes-for-level-base = "512MB"
# target-file-size-base = "8MB"
# should tune to 256MB~2GB.
# block-cache-size = "256MB"
# level0-file-num-compaction-trigger = 4
# level0-slowdown-writes-trigger = 20
# level0-stop-writes-trigger = 36
# cache-index-and-filter-blocks = true
# pin-l0-filter-and-index-blocks = true
# compaction-pri = 0
# read-amp-bytes-per-bit = 0
# dynamic-level-bytes = true
[security]
# Set the path for certificates. An empty string means disabling secure connections.
# ca-path = ""
# cert-path = ""
# key-path = ""
[import]
# the directory to store importing kv data.
# import-dir = "/tmp/tikv/import"
# number of threads to handle RPC requests.
# num-threads = 8
# stream channel window size, stream will be blocked on channel full.
# stream-channel-window = 128

@ -0,0 +1,173 @@
version: '2.1'
services:
  pd0:
    image: pingcap/pd:latest
    ports:
      - "2379"
    volumes:
      - ./config/pd.toml:/pd.toml:ro
      - ./data:/data
      - ./logs:/logs
    command:
      - --name=pd0
      - --client-urls=http://0.0.0.0:2379
      - --peer-urls=http://0.0.0.0:2380
      - --advertise-client-urls=http://pd0:2379
      - --advertise-peer-urls=http://pd0:2380
      - --initial-cluster=pd0=http://pd0:2380,pd1=http://pd1:2380,pd2=http://pd2:2380
      - --data-dir=/data/pd0
      - --config=/pd.toml
      - --log-file=/logs/pd0.log
    restart: on-failure
  pd1:
    image: pingcap/pd:latest
    ports:
      - "2379"
    volumes:
      - ./config/pd.toml:/pd.toml:ro
      - ./data:/data
      - ./logs:/logs
    command:
      - --name=pd1
      - --client-urls=http://0.0.0.0:2379
      - --peer-urls=http://0.0.0.0:2380
      - --advertise-client-urls=http://pd1:2379
      - --advertise-peer-urls=http://pd1:2380
      - --initial-cluster=pd0=http://pd0:2380,pd1=http://pd1:2380,pd2=http://pd2:2380
      - --data-dir=/data/pd1
      - --config=/pd.toml
      - --log-file=/logs/pd1.log
    restart: on-failure
  pd2:
    image: pingcap/pd:latest
    ports:
      - "2379"
    volumes:
      - ./config/pd.toml:/pd.toml:ro
      - ./data:/data
      - ./logs:/logs
    command:
      - --name=pd2
      - --client-urls=http://0.0.0.0:2379
      - --peer-urls=http://0.0.0.0:2380
      - --advertise-client-urls=http://pd2:2379
      - --advertise-peer-urls=http://pd2:2380
      - --initial-cluster=pd0=http://pd0:2380,pd1=http://pd1:2380,pd2=http://pd2:2380
      - --data-dir=/data/pd2
      - --config=/pd.toml
      - --log-file=/logs/pd2.log
    restart: on-failure
  tikv0:
    image: pingcap/tikv:latest
    volumes:
      - ./config/tikv.toml:/tikv.toml:ro
      - ./data:/data
      - ./logs:/logs
    command:
      - --addr=0.0.0.0:20160
      - --advertise-addr=tikv0:20160
      - --data-dir=/data/tikv0
      - --pd=pd0:2379,pd1:2379,pd2:2379
      - --config=/tikv.toml
      - --log-file=/logs/tikv0.log
    depends_on:
      - "pd0"
      - "pd1"
      - "pd2"
    restart: on-failure
  tikv1:
    image: pingcap/tikv:latest
    volumes:
      - ./config/tikv.toml:/tikv.toml:ro
      - ./data:/data
      - ./logs:/logs
    command:
      - --addr=0.0.0.0:20160
      - --advertise-addr=tikv1:20160
      - --data-dir=/data/tikv1
      - --pd=pd0:2379,pd1:2379,pd2:2379
      - --config=/tikv.toml
      - --log-file=/logs/tikv1.log
    depends_on:
      - "pd0"
      - "pd1"
      - "pd2"
    restart: on-failure
  tikv2:
    image: pingcap/tikv:latest
    volumes:
      - ./config/tikv.toml:/tikv.toml:ro
      - ./data:/data
      - ./logs:/logs
    command:
      - --addr=0.0.0.0:20160
      - --advertise-addr=tikv2:20160
      - --data-dir=/data/tikv2
      - --pd=pd0:2379,pd1:2379,pd2:2379
      - --config=/tikv.toml
      - --log-file=/logs/tikv2.log
    depends_on:
      - "pd0"
      - "pd1"
      - "pd2"
    restart: on-failure
  tidb:
    image: pingcap/tidb:latest
    ports:
      - "4000:4000"
      - "10080:10080"
    volumes:
      - ./config/tidb.toml:/tidb.toml:ro
      - ./logs:/logs
    command:
      - --store=tikv
      - --path=pd0:2379,pd1:2379,pd2:2379
      - --config=/tidb.toml
      - --log-file=/logs/tidb.log
      - --advertise-address=tidb
    depends_on:
      - "tikv0"
      - "tikv1"
      - "tikv2"
    restart: on-failure
  tispark-master:
    image: pingcap/tispark:latest
    command:
      - /opt/spark/sbin/start-master.sh
    volumes:
      - ./config/spark-defaults.conf:/opt/spark/conf/spark-defaults.conf:ro
    environment:
      SPARK_MASTER_PORT: 7077
      SPARK_MASTER_WEBUI_PORT: 8080
    ports:
      - "7077:7077"
      - "8080:8080"
    depends_on:
      - "tikv0"
      - "tikv1"
      - "tikv2"
    restart: on-failure
  tispark-slave0:
    image: pingcap/tispark:latest
    command:
      - /opt/spark/sbin/start-slave.sh
      - spark://tispark-master:7077
    volumes:
      - ./config/spark-defaults.conf:/opt/spark/conf/spark-defaults.conf:ro
    environment:
      SPARK_WORKER_WEBUI_PORT: 38081
    ports:
      - "38081:38081"
    depends_on:
      - tispark-master
    restart: on-failure
  tidb-vision:
    image: pingcap/tidb-vision:latest
    environment:
      PD_ENDPOINT: pd0:2379
    ports:
      - "8010:8010"
    restart: on-failure
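Once the stack is up, the TiDB status port (10080, published above) gives a quick liveness probe from the host:
```
curl http://localhost:10080/status
```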

@ -0,0 +1,14 @@
# TypeSense w/ Docker Compose
with default API key `123` (set in docker-compose.yml)
## API Key
Change it via the `--api-key` flag in docker-compose.yml, or set the environment variable:
```
TYPESENSE_API_KEY=xyz
```
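A minimal smoke test from the host, using the `123` key from the bundled compose file:
```
curl -H "X-TYPESENSE-API-KEY: 123" http://localhost:8108/health
curl -H "X-TYPESENSE-API-KEY: 123" http://localhost:8108/collections
```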
## Other docs
[typesense install](https://typesense.org/docs/0.21.0/guide/install-typesense.html#%F0%9F%8E%AC-start)

@ -0,0 +1,10 @@
version: '3.2'
services:
  typesense:
    image: typesense/typesense:0.21.0
    command: ./typesense-server --data-dir=/data --api-key=123
    volumes:
      - ./typesense-data:/data
    ports:
      - "8108:8108"

@ -0,0 +1,13 @@
# YugaByteDB w/ Docker Compose
with default empty password
## Connecting with client
```
docker exec -it yb-tserver-n1 /home/yugabyte/bin/ysqlsh -h yb-tserver-n1
```
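`ysqlsh` accepts psql-style `-c` for one-off statements; a quick sanity check:
```
docker exec -it yb-tserver-n1 /home/yugabyte/bin/ysqlsh -h yb-tserver-n1 -c "SELECT version();"
```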
## Other docs
[yugabyte docker compose](https://docs.yugabyte.com/latest/deploy/docker/docker-compose/)

@ -0,0 +1,43 @@
version: '2'
volumes:
  yb-master-data-1:
  yb-tserver-data-1:
services:
  yb-master:
    image: yugabytedb/yugabyte:latest
    container_name: yb-master-n1
    volumes:
      - yb-master-data-1:/mnt/master
    command: [ "/home/yugabyte/bin/yb-master",
               "--fs_data_dirs=/mnt/master",
               "--master_addresses=yb-master-n1:7100",
               "--rpc_bind_addresses=yb-master-n1:7100",
               "--replication_factor=1" ]
    ports:
      - "7000:7000"
    environment:
      SERVICE_7000_NAME: yb-master
  yb-tserver:
    image: yugabytedb/yugabyte:latest
    container_name: yb-tserver-n1
    volumes:
      - yb-tserver-data-1:/mnt/tserver
    command: [ "/home/yugabyte/bin/yb-tserver",
               "--fs_data_dirs=/mnt/tserver",
               "--start_pgsql_proxy",
               "--rpc_bind_addresses=yb-tserver-n1:9100",
               "--tserver_master_addrs=yb-master-n1:7100" ]
    ports:
      - "9042:9042"
      - "5433:5433"
      - "9000:9000"
    environment:
      SERVICE_5433_NAME: ysql
      SERVICE_9042_NAME: ycql
      SERVICE_6379_NAME: yedis
      SERVICE_9000_NAME: yb-tserver
    depends_on:
      - yb-master