Commit 523e0a0a authored by Vincent Pelletier

Document neo.conf.

Extend README to cover installation process more.


git-svn-id: https://svn.erp5.org/repos/neo/branches/prototype3@1256 71dcc9de-d417-0410-9af5-da40c76e7ee4
parent 0474ba5b
README

NEO is a distributed, redundant and scalable implementation of the ZODB API.
NEO stands for Nexedi Enterprise Object. This is the second prototype of NEO;
its design is described at:
http://www.nexedi.org/workspaces/neo
Requirements
- Linux 2.6 or later
- Python 2.4 or later
- ctypes http://python.net/crew/theller/ctypes/
@@ -12,14 +13,67 @@ Requirements
- Zope 2.8 or later
Overview
A NEO cluster is composed of the following types of nodes:
- "master" nodes (mandatory, 1 or more)
Take care of transactionality. Only one master node is really active
(the active master node is called the "primary master") at any given time;
the extra masters are spares (they are called "secondary masters").
- "storage" nodes (mandatory, 1 or more)
Store data in a MySQL database. All available storage nodes are in use
simultaneously. This offers redundancy and data distribution.
- "admin" nodes (mandatory for startup, optional after)
Accept commands from the neoctl tool, transmit them to the
primary master, and monitor the cluster state.
- "client" nodes
Well... Something needing to store/load data in a NEO cluster.
Disclaimer
In addition to the disclaimer contained in the licence this code is
released under, please consider the following.
NEO does not implement any authentication mechanism between its nodes, and
does not encrypt data exchanged between nodes either.
If you want to protect your cluster from malicious nodes, or your data from
being snooped, please consider encrypted tunnelling (such as openvpn).
Installation
a. Make neo directory available for python to import (for example, by
adding its container directory to the PYTHONPATH environment variable).
b. Create a configuration file for your cluster. You should have received
a self-describing configuration file along with the code, named
neo.conf .
c. Start all required nodes :
neomaster -c <your_neo.conf> -s <master_section_name>
neostorage -c <your_neo.conf> -s <storage_section_name>
d. Tell the cluster it can provide service.
neoctl -a <ip:port of admin node> start cluster
This must be done each time the primary master changes. This will be
addressed in a future release.
How to use
1. In zope:
a. Copy neo directory to /path/to/your/zope/lib/python
b. Edit your zope.conf: add a neo import and edit the `zodb_db` section,
replacing its filestorage subsection with a NEOStorage one.
It should look like :
%import neo
<zodb_db main>
@@ -43,6 +97,8 @@ Installation
b. Just create the storage object and play with it:
from neo.client.Storage import Storage
s = Storage(master_nodes="127.0.0.1:10010", name="main")
...
"name" and "master_nodes" parameters have the same meaning as in
configuration file.
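Because NEO implements the ZODB storage API, the Storage object can be plugged
into a regular ZODB database. The sketch below assumes a cluster is already
running and reachable at 127.0.0.1:10010, and that the ZODB and transaction
packages (shipped with Zope 2.8 and later) are importable; the stored key name
is purely illustrative.

from ZODB import DB
import transaction
from neo.client.Storage import Storage

# Open a database backed by the NEO cluster.
s = Storage(master_nodes="127.0.0.1:10010", name="main")
db = DB(s)
conn = db.open()
root = conn.root()

# Any change to persistent objects is written to the storage nodes on commit.
root['counter'] = root.get('counter', 0) + 1
transaction.commit()

conn.close()
db.close()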
neo.conf

# Note: Unless otherwise noted, all parameters in this configuration file
# must be identical for all nodes in a given cluster.
# Default parameters.
[DEFAULT]
# The cluster name
# This must be set.
# It must be a name unique to a given cluster, to prevent foreign
# misconfigured nodes from interfering.
name:
# The list of master nodes
# Master nodes not in this list will be rejected by the cluster.
# This list should be identical for all nodes in a given cluster for
# maximum availability.
master_nodes: 127.0.0.1:10010 127.0.0.1:10011
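# For illustration, two masters running on different hosts (the addresses
# below are hypothetical):
# master_nodes: 192.168.0.1:10010 192.168.0.2:10010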
# Partition table configuration
# Data in the cluster is distributed among nodes using a partition table, which
# has the following parameters.
# Replicas: How many copies of a partition should exist at a time.
# 0 means no redundancy
# 1 means there is a spare copy of all partitions
replicas: 1
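# For example, with replicas set to 1 and at least 2 storage nodes, each
# partition exists on 2 different nodes, so the loss of any single storage
# node does not lose data.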
# Partitions: How data spreads among storage nodes. This number must be at
# least equal to the number of storage nodes the cluster contains.
# IMPORTANT: This must not be changed once the cluster contains data.
partitions: 20
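# For example, with 20 partitions, replicas set to 1 and 4 storage nodes,
# the 20 * 2 = 40 partition copies are spread over the nodes, about 10 each.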
# MySQL credentials
# MySQL username and password used by storage nodes to access data.
# This user must have the right to drop & create tables in its database
# (see storage node configuration sections).
# This can be overridden in individual storage configurations.
user: neo
password: neo
# The type of connection among nodes
connector: SocketConnector
# Individual node parameters
# Some parameters make no sense when defined in the [DEFAULT] section.
# They are:
# server: The ip:port the node will listen on.
# database: Storage nodes only. The MySQL database to use to store data.
# Those databases must be created manually.
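# For illustration only, a per-node section could look like the following
# (the section name, port and database name are hypothetical):
#
# [storage1]
# server: 127.0.0.1:10020
# database: neo1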
# Admin node
[admin]
...