
Cinder volume already mounted but not visible.

This document (7022590) is provided subject to the disclaimer at the end of this document.

Environment

SUSE OpenStack Cloud 7
SUSE Linux Enterprise Server SP3


Situation

In a SUSE OpenStack Cloud 7 environment with the Pacemaker barclamp deployed, using a SUSE Linux Enterprise Server SP3 NFS server as the back-end, Cinder reports its volume share as already mounted, while the mount command does not show the cinder-volume mount.

The mount helper in /usr/lib/python2.7/site-packages/os_brick/remotefs/remotefs.py reports the Cinder share as already mounted, while a mount command does not show it.

In /var/log/cinder/cinder-volume.log the following is observed:
2018-01-15 13:21:37.967 23904 INFO os_brick.remotefs.remotefs [req-89e01f56-f90b-47cf-aee8-f9638ce5ef83 - - - - -] Already mounted: /var/lib/cinder/mnt/
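
The "Already mounted" decision is made against the mount table as seen by the cinder-volume process itself. The snippet below is a minimal sketch of that kind of check (not the actual os_brick code), assuming the mount point is simply compared against the entries in /proc/mounts; because /proc/mounts reflects the mount namespace of the process reading it, the check can succeed inside the cinder-volume namespace even though a root shell sees no such mount.

# Minimal sketch (not the actual os_brick implementation) of an
# "already mounted" check against /proc/mounts. /proc/mounts is
# namespace-aware: it lists the mounts of the process reading it, so
# cinder-volume can see its NFS mount here even when a root shell does not.
import os

def is_mounted(mount_point):
    """Return True if mount_point appears in this process's mount table."""
    real_path = os.path.realpath(mount_point)
    with open('/proc/mounts') as mounts:
        for line in mounts:
            # Format: <device> <mount point> <fstype> <options> <dump> <pass>
            fields = line.split()
            if len(fields) > 1 and fields[1] == real_path:
                return True
    return False

# Example: the mount point used by this environment (see the 'df' output
# in the Cause section below).
print(is_mounted('/var/lib/cinder/mnt/cfecea6f4744f85a0a802c9d4ed2b8d1'))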
Note:
- The cinder-volume role is deployed on the cluster, using an NFS share as the back-end.
- A mount command does not show the cinder resource as mounted:
root@d0c-c4-7a-d2-88-ea:~ # mount | grep nfs
10.0.0.5:/srv/nfs/glance on /glance type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.0.0.5,mountvers=3,mountport=20048,mountproto=udp,local_lock=none,addr=10.0.0.5)
root@d0c-c4-7a-d2-88-ea:~ #

- An attempt to create a Cinder volume fails:
root@d0c-c4-7a-d2-88-ea:~ # cinder create --name testvolume 1

root@d0c-c4-7a-d2-88-ea:~ # cinder show testvolume
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2018-01-19T14:18:39.370185           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | 3bfc347f-35d3-4a1a-ba20-afdcc6321b96 |
| metadata                       | {}                                   |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | testvolume                           |
| os-vol-host-attr:host          | None                                 |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | 30d0a619bb76450fb398435af81fbab6     |
| replication_status             | disabled                             |
| size                           | 1                                    |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | error                                |
| updated_at                     | 2018-01-19T14:18:40.141022           |
| user_id                        | e1c8eee635864e2f9e87100f74e4bc60     |
| volume_type                    | None                                 |
+--------------------------------+--------------------------------------+
root@d0c-c4-7a-d2-88-ea:~ #

Resolution

The permissions were not set correctly on the NFS server. A 'chown' command using the cinder user's uid and gid corrects the permission problem.

To verify the current uid and gid of the cinder user:

ssh controller01
Last login: Thu Jan 25 01:55:52 2018 from 192.168.124.10
root@d52-54-00-63-a1-01:~ # id cinder
uid=193(cinder) gid=480(cinder) groups=480(cinder)
root@d52-54-00-63-a1-01:~ #


Run the following 'chown' command on the NFS server:
nfsserv:~ # chown 193:480 /export/cinder
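
To confirm the new ownership on the export, a quick check such as the following can be run on the NFS server (a hedged sketch; the path /export/cinder and the uid/gid 193:480 are the values from this environment):

# Hypothetical helper: confirm that the NFS export is owned by the
# cinder user's uid/gid (193:480 here, as reported by 'id cinder'
# on the controller node).
import os
import sys

EXPORT = '/export/cinder'
CINDER_UID = 193
CINDER_GID = 480

st = os.stat(EXPORT)
if (st.st_uid, st.st_gid) == (CINDER_UID, CINDER_GID):
    print('%s is owned by %d:%d as expected' % (EXPORT, CINDER_UID, CINDER_GID))
else:
    print('unexpected ownership %d:%d on %s' % (st.st_uid, st.st_gid, EXPORT))
    sys.exit(1)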

Finally, restart the cinder-volume service:

systemctl restart openstack-cinder-volume.service

Cause

The NFS mount is not visible in the root shell because cinder-volume runs in its own mount (MNT) namespace. From the perspective of the cinder-volume namespace the NFS mount is present, but it is not visible to the 'root' user's shell, since that shell runs in a different mount namespace.

The 'nsenter' command can show this:
nsenter -t `pgrep cinder-volume | head -1` -m df -h
Filesystem                     Size  Used Avail Use% Mounted on
/dev/sda3                       38G  2.6G   33G   8% /
devtmpfs                       2.7G     0  2.7G   0% /dev
tmpfs                          2.7G   54M  2.7G   2% /dev/shm
tmpfs                          2.7G     0  2.7G   0% /sys/fs/cgroup
tmpfs                          2.7G   18M  2.7G   1% /run
tmpfs                          547M     0  547M   0% /run/user/196
/dev/sdb                      1014M   36M  979M   4% /var/lib/rabbitmq
192.168.124.62:/export/cinder   15G  3.1G   12G  22% /var/lib/cinder/mnt/cfecea6f4744f85a0a802c9d4ed2b8d1
tmpfs                          547M     0  547M   0% /run/user/0
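
Equivalently, the mount table of the cinder-volume namespace can be inspected without nsenter by reading /proc/<pid>/mounts, which reflects the mount namespace of that process. The following is an illustrative sketch, not part of the product:

# Illustration: list the NFS mounts visible to the cinder-volume process
# by reading /proc/<pid>/mounts (the same view nsenter + df gives above).
import subprocess

# First cinder-volume PID, as in the nsenter example.
pid = subprocess.check_output(['pgrep', 'cinder-volume']).split()[0].decode()

with open('/proc/%s/mounts' % pid) as mounts:
    for line in mounts:
        fields = line.split()
        if len(fields) > 2 and fields[2].startswith('nfs'):
            print('%s on %s' % (fields[0], fields[1]))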

Disclaimer

This Support Knowledgebase provides a valuable tool for SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented "AS IS" WITHOUT WARRANTY OF ANY KIND.

  • Document ID: 7022590
  • Creation Date: 23-Jan-2018
  • Modified Date: 03-Mar-2020
    • SUSE Linux Enterprise High Availability Extension
    • SUSE Linux Enterprise Server
    • SUSE OpenStack Cloud
