
Configure a rawx service

  • Replace UID with an unused ID in your puppet file (an incremental integer works, for example)
  • Replace the ports in the rawx block and in the rdir block with ports that are unused on your server
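For illustration, here is a minimal sketch of what those two bullets might look like in a puppet manifest. The resource names, the ID value (1) and the ports (6201/6301) are placeholders chosen for this example, and the exact parameter names depend on the version of the puppet-openiosds module you are using, so check them against the module's documentation:

```puppet
# Hypothetical rawx + rdir declaration; "1", 6201 and 6301 are
# unused values picked for this example, not module defaults.
openiosds::rawx {'rawx-1':
  ns   => 'OPENIO',
  num  => 1,      # the unused incremental ID
  port => 6201,   # an unused port on this server
}
openiosds::rdir {'rdir-1':
  ns   => 'OPENIO',
  num  => 1,
  port => 6301,   # an unused port on this server
}
```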

This is a horrible amount of work just to add hard drives.


I can’t see the newly added rawx after puppet apply.


Can someone explain why openio --oio-ns=OPENIO cluster unlockall is required, and when it should be run?


Hello @yongsheng

If you run openio cluster list --oio-ns OPENIO, you will see the list of scored services on your cluster, as well as their scores. The score is an internal metric used to determine the health of a particular service, in order to decide whether it should be chosen to accept a request.

Now, these scores can be locked to any value between 0 and 100, either because the service is newly registered (locked to 0), because it was locked by an administrator, or in some other specific circumstances.

Thus the command openio --oio-ns=OPENIO cluster unlockall will release the scores of all the services in your cluster, so that they can evolve “naturally” according to your machine’s resources and load, and guide the decision-making of the internal load balancers.

Simply put, when you see a locked service in “cluster list” that you actually want to accept requests, you should unlock it (either with unlockall, or by targeting it specifically with openio cluster unlock --oio-ns OPENIO [service_type] [service_ip]:[service_port]).
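Putting the commands above together, a typical session looks like this — assuming the namespace is OPENIO, and using a placeholder rawx address for the single-service case:

```shell
# Inspect current scores; locked services typically show a score of 0
openio cluster list --oio-ns OPENIO

# Release the scores of every service in the namespace
openio --oio-ns=OPENIO cluster unlockall

# ...or unlock a single service (hypothetical address)
openio cluster unlock --oio-ns OPENIO rawx 10.0.0.1:6201
```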


Hi vladimir-openio,

  1. I’m able to add a rawx into the cluster. How about removing one? Should I just remove its entries from the puppet (.pp) scripts?

  2. How about adding a node or SDS? Should I just add its IP to zookeeper_url and sentinel_hosts in the puppet scripts and then apply?

  3. How about removing a node or SDS? Should I just remove its IP from zookeeper_url and sentinel_hosts in the puppet scripts and then apply?

  4. How do I know that the data in the cluster is balanced again (I mean complete copies or EC chunks) after removing a rawx, so that it’s safe for me to remove the next one?

Best regards,


Hello @yongsheng:

  1. Decommissioning a rawx service is not so simple: it requires you to lock the service, move all of its data onto another rawx, link the new rawx with another rdir and unlink the old one, then finally remove the service manually from SDS (stop it and remove it from gridinit, remove its watchers in the conscienceagent, restart the conscienceagent, flush the conscience, and possibly restart the oio-proxy). It is highly recommended not to reduce the number of services, as it could break your data security requirements; typically you would move a rawx to another location rather than simply decommission it.
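Sketched as commands, the lock-then-cleanup part of point 1 could look roughly like this. The service address, the gridinit service key, and even the availability of cluster lock/flush subcommands in your SDS version are assumptions on my side, so verify each against your deployment before running anything:

```shell
# Freeze the score so no new data lands on this rawx (placeholder address)
openio cluster lock rawx 10.0.0.1:6201 --oio-ns OPENIO

# ...move the data off and relink the rdir here, then:

# Stop the service and drop it from gridinit (key name depends on your layout)
gridinit_cmd stop OPENIO-rawx-1

# After removing its watcher from the conscienceagent configuration and
# restarting the agent, flush the stale entry from the conscience
openio cluster flush rawx --oio-ns OPENIO
```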

  2. Adding a node doesn’t require you to deploy all the services on it. Just make sure that the new node has an oioproxy, a conscienceagent, an eventagent + beanstalk, and any number of rawx/meta2 services (each rawx comes with its blob indexer and its rdir). Scaling other services can be done as well but is less straightforward, so I won’t describe it here. From there, compose your puppet file, apply it, restart the conscienceagent, re-run a volume admin bootstrap, and unlock the scores once the services are registered in the conscience. Make sure that the class {'openiosds':} and the namespace declarations are present and properly filled in.
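The sequence in point 2 could run roughly as follows. The manifest path and the gridinit service key are placeholders, and the exact bootstrap invocation may differ in your SDS version, so double-check each line first:

```shell
# Apply the updated manifest (path is a placeholder)
puppet apply /etc/puppet/manifests/openio.pp

# Restart the conscienceagent so it picks up the new watchers
# (service key depends on your gridinit layout)
gridinit_cmd restart OPENIO-conscienceagent-0

# Re-run the volume admin bootstrap, then unlock the new services
openio volume admin bootstrap --oio-ns OPENIO
openio --oio-ns=OPENIO cluster unlockall
```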

  3. The difficulty of removing a node depends on the services it hosts. The more critical the services (ZK or meta0), the trickier they are to move. Again, one would usually move the services to another node instead of just removing them. You’ll need to proceed service by service, which usually involves invoking the right mover/rebuilder scripts to move all the data elsewhere, re-running some bootstrap operations (e.g. for meta0, which can also be moved as a database), clearing the caches at all levels, and updating all the required references and configuration. On a production environment this operation can be very time-consuming, but when done right you will experience no loss in SLA whatsoever during the migration.

  4. When you remove a rawx you have to rebuild its data elsewhere, onto another rawx. It is safe to remove the rawx once the rebuild is done. Again, we do not encourage shrinking your nodes, as it is detrimental to your data security and will most probably result in an imbalanced cluster and a loss of performance.
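If your SDS version ships the blob rebuilder tool, the rebuild step in point 4 might look like this — the tool name, its arguments and the address are assumptions to verify against your installation:

```shell
# Rebuild elsewhere the chunks that lived on the removed rawx
# (placeholder address), driven by the rdir that was linked to it
oio-blob-rebuilder OPENIO --volume 10.0.0.1:6201
```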