This is on the maps-warper instance.
This is related to T102414: I was attempting to move the PostgreSQL data files from the full partition to /mnt to free up some space, as the partition keeps filling up.
(Note that I had added the labs::lvm::srv role, but all the extra space I saw appeared in /mnt rather than in /srv as I expected.)
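For whoever looks into this, a quick way to confirm where the extra space actually ended up (and whether the labs::lvm::srv volume exists at all) would be something along these lines; I'm listing these for reference rather than as output I still have:
```
# Show all block devices and where they are mounted (vda, vdb, any LVM volumes)
lsblk

# If the labs::lvm::srv role created an LVM volume, these commands will list it
sudo pvs
sudo lvs

# Compare the mount points the space could have landed on
df -h /mnt /srv
```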
So.
Prior to creating new directories and moving files on /mnt, the filesystem was already reporting errors in syslog:
```
Sep 14 12:47:17 maps-warper kernel: [2669113.413301] EXT3-fs error (device vdb): ext3_valid_block_bitmap: Invalid block bitmap - block_group = 273, block = 8945664
Sep 14 12:47:18 maps-warper kernel: [2669114.618342] EXT3-fs error (device vdb): ext3_valid_block_bitmap: Invalid block bitmap - block_group = 274, block = 8978432
Sep 14 13:08:41 maps-warper kernel: [2670397.830058] journal_bmap: journal block not found at offset 1036 on vdb
Sep 14 13:08:41 maps-warper kernel: [2670397.830515] Aborting journal on device vdb.
Sep 14 13:08:53 maps-warper kernel: [2670409.957811] EXT3-fs (vdb): error: ext3_journal_start_sb: Detected aborted journal
Sep 14 13:08:53 maps-warper kernel: [2670409.961066] EXT3-fs (vdb): error: remounting filesystem read-only
```
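For the record, the usual way to check and repair this kind of ext3 corruption, assuming the filesystem can be taken offline first, would be roughly the following (I have not run this against the current state of the instance, and -y auto-answers the repair prompts, so use with care):
```
# e2fsck must not be run on a mounted filesystem, so unmount it first
sudo umount /mnt

# Force a full check of the ext3 filesystem on vdb and repair what it can
sudo e2fsck -f -y /dev/vdb

# Remount and confirm it comes back read-write
sudo mount /dev/vdb /mnt
```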
After I moved the files, the filesystem was remounted read-only. However, it still showed as (rw) when I ran "mount". Sorry, I don't have a log of this, as the scrollback was wiped when I rebooted.
I was able to copy the postgresql data directory back to its original location and everything worked fine. Nothing else was on /mnt apart from this and the empty "lost+found" and "keys" directories.
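I suspect "mount" kept saying (rw) because it reads /etc/mtab, which can go stale when the kernel remounts a filesystem on its own; the kernel's real view is in /proc/mounts. That is an assumption about what happened rather than something I verified at the time, but a check like the following would have shown the actual state:
```
# /proc/mounts reflects the kernel's actual mount state, including the ro flag
grep vdb /proc/mounts

# The kernel log also records the forced remount
dmesg | grep -i 'read-only'
```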
Today I rebooted the instance, but the partition and its filesystem are missing from the output of "df -h" and "mount":
```
chippy@maps-warper:~$ df -h
Filesystem                                       Size  Used Avail Use% Mounted on
/dev/vda1                                        3.8G  3.5G   97M  98% /
udev                                             2.0G  8.0K  2.0G   1% /dev
tmpfs                                            396M  268K  396M   1% /run
none                                             5.0M     0  5.0M   0% /run/lock
none                                             2.0G     0  2.0G   0% /run/shm
labstore.svc.eqiad.wmnet:/project/maps/project   6.0T  2.4T  3.6T  41% /data/project
labstore.svc.eqiad.wmnet:/scratch                984G  437G  497G  47% /data/scratch
labstore1003.eqiad.wmnet:/dumps                   44T   13T   31T  30% /public/dumps
labstore.svc.eqiad.wmnet:/project/maps/home      6.0T  2.4T  3.6T  41% /home
```
I think, though I can't be sure, that the amount in use on /dev/vda1 increased by around 20 MB compared with before the reboot.
```
chippy@maps-warper:~$ mount
/dev/vda1 on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
none on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
udev on /dev type devtmpfs (rw,mode=0755)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
none on /run/shm type tmpfs (rw,nosuid,nodev)
labstore.svc.eqiad.wmnet:/project/maps/project on /data/project type nfs (rw,noatime,vers=4,bg,hard,intr,sec=sys,proto=tcp,port=0,nofsc,addr=10.64.37.10,clientaddr=10.68.17.33)
labstore.svc.eqiad.wmnet:/scratch on /data/scratch type nfs (rw,noatime,vers=4,bg,hard,intr,sec=sys,proto=tcp,port=0,nofsc,addr=10.64.37.10,clientaddr=10.68.17.33)
labstore1003.eqiad.wmnet:/dumps on /public/dumps type nfs (ro,noatime,vers=4,bg,hard,intr,sec=sys,proto=tcp,port=0,nofsc,addr=10.64.4.10,clientaddr=10.68.17.33)
labstore.svc.eqiad.wmnet:/project/maps/home on /home type nfs (rw,noatime,vers=4,bg,hard,intr,sec=sys,proto=tcp,port=0,nofsc,addr=10.64.37.10,clientaddr=10.68.17.33)
rpc_pipefs on /run/rpc_pipefs type rpc_pipefs (rw)
```
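For whoever picks this up: checks along these lines should show whether the vdb device is still attached to the instance and simply not being mounted, or whether it has disappeared entirely (I haven't captured this output yet, so treat it as a suggestion):
```
# Is the vdb block device still visible to the kernel?
lsblk
cat /proc/partitions

# Does it still carry a recognisable filesystem?
sudo blkid /dev/vdb

# Was it in fstab, or was the mount set up some other way?
grep -E 'vdb|/mnt' /etc/fstab

# Any errors mentioning vdb during the reboot?
dmesg | grep -i vdb
```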
The instance appears to be working normally and serving the application fine.
Thanks in advance for your help. Ideally, this instance could also do with some more space somehow.