OpenStack Kilo Design Summit
This is the schedule for the Kilo Design Summit, where OpenStack contributors discuss the future of OpenStack development.

Cinder
Tuesday, November 4
 

16:40 CET

Gerrit third-party CI discussion
This discussion will focus on the priorities that third-party CI operators have identified as needing further community input (https://etherpad.openstack.org/p/kilo-third-party-items), as discussed at the weekly third-party meeting (https://wiki.openstack.org/wiki/Meetings/ThirdParty).

Tuesday November 4, 2014 16:40 - 17:20 CET
Derain
 
Wednesday, November 5
 

09:00 CET

Better Async Error Reporting in Cinder
Led by: Winston and Alex
In the current Cinder workflow, a few 'special' create requests bypass the scheduler (i.e. the API service sends requests directly to the Volume service): creating a snapshot, a volume clone, a volume from a snapshot, or a volume replica. This shortcut was introduced because the scheduler wasn't aware that such requests don't really need scheduling; most storage backends can't snapshot a volume that doesn't reside on them, or clone a volume onto another backend. But in cases where the storage backend doesn't have enough capacity to serve the request, the scheduler is actually a good place to stop the request from going further.
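As a rough sketch of the kind of check the scheduler could apply once these requests flow through it (the stats field names follow Cinder's volume-stats reporting; the helper itself is hypothetical):

```python
def backend_has_capacity(stats, requested_gb):
    """Hypothetical scheduler-side capacity check for the 'special'
    create requests described above."""
    free = stats.get('free_capacity_gb')
    if free in ('infinite', 'unknown'):
        # The backend reported no usable number; let the request through.
        return True
    return free >= requested_gb

# The scheduler could then fail the request early, instead of letting the
# volume service discover the shortage later, e.g.:
#   if not backend_has_capacity(stats, volume['size']):
#       raise exception.NoValidHost(reason='not enough free capacity')
```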


Wednesday November 5, 2014 09:00 - 09:40 CET
Derain

09:50 CET

Cinder Automated discovery and storage config
Led by: Anjaneya "Reddy" Chagam
Currently, the admin enters storage information manually in cinder.conf. This is not a sustainable model in large enterprises that have many different storage systems. Moreover, it requires the admin to have deep knowledge to identify storage system features and figure out the best way to group them. Solution providers would also like to offer differentiated capabilities that admins can take advantage of during pool composition and volume provisioning.

* Propose adding a discovery module with a driver-based framework (a sketch of such a driver interface follows this list), with the following changes:
** Database persistence for storage systems.
** REST APIs for storage systems and capabilities (create, update, delete, list operations).
** The ability to manually configure storage specs or additional behavior the admin deems useful, to accommodate legacy storage systems.
** Changes to the existing Cinder code to use storage information from the database instead of cinder.conf (backend portion only), e.g. volume startup, cinder-volume, and scheduler changes.
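To make the driver-based framework concrete, here is a minimal sketch of what such a discovery driver interface might look like; every name below is an assumption for illustration, not part of the proposal:

```python
import abc


class StorageDiscoveryDriver(abc.ABC):
    """Hypothetical base class for vendor discovery drivers."""

    @abc.abstractmethod
    def discover_systems(self):
        """Return descriptions of reachable storage systems, to be
        persisted in the database instead of entered in cinder.conf."""


class ExampleVendorDriver(StorageDiscoveryDriver):
    def discover_systems(self):
        # A real driver would query the array's management API here.
        return [{'name': 'array-1',
                 'pools': ['pool0'],
                 'capabilities': {'thin_provisioning': True}}]
```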


Wednesday November 5, 2014 09:50 - 10:30 CET
Derain

11:00 CET

Extracting Brick from Cinder
Led by: Walter Boring
The idea of 'brick' was created back in the Havana timeframe. All along it was meant to be a standalone library that Cinder, Nova, and any other OpenStack project could use. Currently brick lives in a directory inside Cinder, which means that only Cinder can use it.

We want to extract the brick directory and package it as its own PyPI library that any Python project can use.

So we need to:

* First, create a separate Python project and release it on PyPI
* Add brick to Cinder's requirements.txt
* Remove the existing cinder/brick directory
* Modify every place in Cinder that uses cinder/brick to use the new PyPI library (a sketch of the import change follows this list)
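Assuming the extracted library keeps its current module layout, the Cinder-side change would mostly be an import swap (the external package name is an assumption here):

```python
# Before: brick is an in-tree package, usable only by Cinder.
from cinder.brick.initiator import connector

# After: brick is an external PyPI dependency listed in requirements.txt.
from brick.initiator import connector
```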


Wednesday November 5, 2014 11:00 - 11:40 CET
Derain

11:50 CET

Cinder Rolling Upgrades
Led by: Duncan Thomas
The proposed change is multi-fold, and the implementation details are not 100% fleshed out yet.

1. All new RPC changes need to leave the old version in place and functional - this might mean inserting blank / default values into new fields, etc. How long the old version(s) need to live needs to be decided - I'd suggest at least a full stable release to enable upgrade from release to release.

2. When adding a new RPC, the code must behave sensibly if that RPC is not received on the far side - the state machine work might help with this, since things can be timed out / retried. In some cases admin action might be required, e.g. state reset APIs. This will vary from change to change.

3. On startup, a manager must query the DB and see if there are any RPC version requirements in the DB that apply to it. If the requirements cannot be met, then it should exit with a suitable log message. If they can be met, then the requirement should be cached to avoid having to query the DB again. An RPC can be added to cause the cache to be updated without restarting the manager, if desired; however, that won't be in the initial version.

4. When sending an RPC, the manager should check the cache for the max version it is allowed to send, and use that. Again, this requires the code to inherently support this kind of fallback, which is a new requirement.

This should mean that an upgrade works as follows:

1. First signal (via a new RPC call or by restarting them) all managers to write their maximum supported RPC versions into the DB. This will be one record per RPC per manager.

2. Update one service. On startup, it works out the maximum RPC version it can currently send by looking at what others can receive and taking the minimum. It also updates the DB with what new versions it can handle, if any.

3. Roll the update out to more services. They update the DB as appropriate.

4. Either signal via RPC or restart all running services; they should now all see that the new-version RPCs are supported everywhere.
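As a sketch of the send-side behaviour in point 4, assuming oslo.messaging (the topic and version numbers are illustrative), the cached minimum version from the DB becomes the client's version cap:

```python
import oslo_messaging as messaging


class VolumeAPI(object):
    def __init__(self, transport, cached_min_version):
        # cached_min_version is the minimum RPC version all peers wrote
        # into the DB; the cap stops this service from sending anything newer.
        target = messaging.Target(topic='cinder-volume', version='2.0')
        self.client = messaging.RPCClient(transport, target,
                                          version_cap=cached_min_version)

    def create_volume(self, ctxt, volume_id):
        # Send the new-format call only if every peer can receive it;
        # otherwise fall back to the old format.
        if self.client.can_send_version('2.0'):
            cctxt = self.client.prepare(version='2.0')
        else:
            cctxt = self.client.prepare(version='1.0')
        cctxt.cast(ctxt, 'create_volume', volume_id=volume_id)
```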


Wednesday November 5, 2014 11:50 - 12:30 CET
Derain

13:50 CET

Objectify Cinder
Led by: Thang Pham
By using objects, we get a standardized interface to the database, as well as a standardized format for the data passed between Cinder services over RPC. This separates the code from the actual database implementation, making rolling upgrades easier.
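For illustration, a minimal versioned object along the lines of oslo.versionedobjects (the field set here is a guess, not the actual proposal):

```python
from oslo_versionedobjects import base
from oslo_versionedobjects import fields


class Volume(base.VersionedObject):
    # Bumping VERSION when fields change is what lets services negotiate
    # what they send over RPC, enabling rolling upgrades.
    VERSION = '1.0'

    fields = {
        'id': fields.UUIDField(),
        'size': fields.IntegerField(),
        'status': fields.StringField(nullable=True),
    }
```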


Wednesday November 5, 2014 13:50 - 14:30 CET
Derain

14:40 CET

Cinder State Machine
Concurrent resource access in Cinder is a problem that has caused resource corruption when a resource is mutated simultaneously through multiple Cinder entry points (the API and the manager, for example). In Icehouse, locks were added around manager functions to queue up requests when a resource is being worked on simultaneously by multiple functions (this stops one of those operations from concurrently mutating the underlying resource). Sadly, this is more of a *sledgehammer* approach: it hides the symptoms of the problem and makes it non-obvious when debugging which other requests are queued up behind the lock (or why deadlocking is occurring, if and when it does).

To help alleviate and hopefully solve this problem, we will try to attack some of these issues in a different manner: integrating an *allowed* state transition table into the `create_volume` workflow, doing *strategic* state transitions, and aborting/erroring out when a state transition is not allowed. In the future this will help create a concrete set of well-defined states and transitions for other workflows as well (and will make it clear, both while reading the code and during debugging, which transitions are allowed and which are actively occurring).
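A minimal sketch of the transition-table idea (the states shown are existing volume statuses, but the table contents and the helper are illustrative):

```python
# Illustrative allowed-transition table for the create_volume workflow.
ALLOWED_TRANSITIONS = {
    'creating': {'available', 'error'},
    'downloading': {'available', 'error'},
    'error': {'deleting'},
}


def transition(volume, new_status):
    allowed = ALLOWED_TRANSITIONS.get(volume['status'], set())
    if new_status not in allowed:
        # Abort instead of silently mutating a resource that another
        # operation may be working on.
        raise ValueError('%s -> %s is not an allowed transition'
                         % (volume['status'], new_status))
    volume['status'] = new_status
```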


Wednesday November 5, 2014 14:40 - 15:20 CET
Derain

15:30 CET

Cinder Scheduler To Support Oversubscription
Led by: Xing Yang & Eoghan Glynn
'infinite' and 'unknown' were initially used in capacity reporting by vendors using thin provisioning. They are now given the lowest weight and are not the recommended way of reporting capacities. The proposal is to include virtual capacity in capacity reporting and to allow oversubscription for thin provisioning. Virtual capacity "provisioned_capacity_gb" was already part of Winston's spec: https://review.openstack.org/#/c/105190/6/specs/juno/volume-statistics-reporting.rst. This proposal adds support for oversubscription in thin provisioning and will be built on top of Winston's spec.


* Backend reports virtual capacity in get_volume_stats, in addition to total capacity and free capacity.
* Scheduler checks available virtual capacity and available real capacity.
* The oversubscription ratio will be used to control virtual capacity allocation.
* The used ratio will be used to control real capacity usage.
* These two ratios together should prevent over-provisioning.
* Send a notification if capacity reaches the upper limit controlled by the oversubscription ratio or the used ratio.
* Cinder spec: https://review.openstack.org/#/c/129342/
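A back-of-the-envelope version of the proposed check, assuming the stats names from Winston's spec plus two illustrative ratio parameters:

```python
def can_place(stats, requested_gb,
              over_subscription_ratio=20.0, used_ratio=0.95):
    """Sketch of a scheduler-side oversubscription check."""
    total = stats['total_capacity_gb']
    # Virtual headroom: how much more may be provisioned under the ratio.
    virtual_free = (total * over_subscription_ratio
                    - stats['provisioned_capacity_gb'])
    # Real headroom: keep actual usage below the used-ratio threshold.
    real_used = total - stats['free_capacity_gb']
    return requested_gb <= virtual_free and real_used / total <= used_ratio
```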


Wednesday November 5, 2014 15:30 - 16:10 CET
Derain
 
Friday, November 7
 

09:00 CET

Cinder contributors meetup
The contributors meetup is an informal gathering of the project contributors, with an open agenda.

Friday November 7, 2014 09:00 - 12:30 CET
Manet