File systems
Contents
- 1 Step 1: Adding a disk to the virtual machine
- 2 Step 2: View disks in Linux
- 3 Step 3: Creating Linux partitions
- 4 Step 4: Format partition: creating a file system
- 5 Step 5: Mounting and unmounting partitions in Linux
- 6 Step 6: Creation of a RAID 1 (mirror)
- 7 Step 7: Managing volumes (Logic Volume Manager, LVM)
- 8 Step 8: GlusterFS
- 9 Step 9: Resize virtual disk size of Ubuntu cloud image already imported into libvirt
- 10 Exercise
Step 1: Adding a disk to the virtual machine
We are going to use any Ubuntu cloud virtual machine that we have used previously, and we are going to add two virtual disks to it for testing.
- We open the window of the virtual machine we want to use.
- Move to View -> Details, and click on the 'Add hardware' button.
- We select Storage, give it a size of 5GB and, as the device type, select 'disk device'. With these options we click Finish, and our disk will be created.
- Repeat the previous step and create another 4GB disk (a command-line alternative is sketched right after this list).
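If you prefer the command line, the disks can also be created and attached from the host; this is just a sketch, assuming a hypothetical libvirt domain name 'ubuntu-vm' and image paths that you should adapt to your setup:
qemu-img create -f qcow2 /var/lib/libvirt/images/extra-disk1.qcow2 5G # create a 5GB qcow2 image on the host
qemu-img create -f qcow2 /var/lib/libvirt/images/extra-disk2.qcow2 4G # create a 4GB qcow2 image on the host
virsh attach-disk ubuntu-vm /var/lib/libvirt/images/extra-disk1.qcow2 vdb --driver qemu --subdriver qcow2 --persistent
virsh attach-disk ubuntu-vm /var/lib/libvirt/images/extra-disk2.qcow2 vdc --driver qemu --subdriver qcow2 --persistent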
Step 2: View disks in Linux
Let's go back to the console view (View -> Console) and start the machine (Virtual machine -> Run).
To check the disks added in step 1 let's use the lsblk command, which will show us an output similar to the following:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
vda 252:0 0 2.2G 0 disk
├─vda1 252:1 0 2.1G 0 part /
├─vda14 252:14 0 4M 0 part
└─vda15 252:15 0 106M 0 part /boot/efi
vdb 252:16 0 5G 0 disk
vdc 252:32 0 4G 0 disk
Here we see that we have 3 disks:
- vda: the original Ubuntu disk from when we created the machine.
- vdb: the 5GB disk we added in step 1.
- vdc: the 4GB disk we added in step 1.
We can also see that the vda disk has 3 partitions: vda1, vda14 and vda15
Step 3: Creating Linux partitions
We are now going to use the 5GB disk we added. In the previous step we should have identified which one it is (in my case /dev/vdb); make sure which one is yours, and let's start partitioning the disk:
sudo fdisk /dev/vdb
If the disk is new and we haven't done anything with it previously, it usually comes without a partition table, so fdisk takes care of creating one; we will see a message about this when we run the previous command. The partition table is a small part of the disk that is used to store the partition information: where each partition starts and ends, its format, and whether it is bootable or not.
Once this is done, we will see that we are inside fdisk (a tool for partitioning a disk), which has its own command line.
Now we will look at the fdisk help and create a 3GB test partition:
- Enter the letter 'm' and press Enter to get the list of possible fdisk commands.
- Enter 'n' and press Enter to create a new partition. You will be asked for several details:
- Partition type: press Enter and it will be assigned primary by default.
- Partition number: press Enter and it will be set to 1 by default.
- First sector: press Enter and it will be assigned by default.
- Last sector: we are going to create a 3GB partition, so we type +3G and press Enter.
- Our first partition is already created, although the changes have not yet been written to disk, for that, we will need to apply these changes, and we do it with the command 'w' and pressing Enter.
- Now we have our partition created. Let's check that the change has been applied, using lsblk for example; we should get something similar to the following (a scripted alternative is sketched right after this output):
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
vda 252:0 0 2.2G 0 disk
├─vda1 252:1 0 2.1G 0 part /
├─vda14 252:14 0 4M 0 part
└─vda15 252:15 0 106M 0 part /boot/efi
vdb 252:16 0 5G 0 disk
└─vdb1 252:17 0 3G 0 part
vdc 252:32 0 4G 0 disk
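As an alternative to answering fdisk interactively, the layout can be scripted with sfdisk, which reads the partition description from standard input. A minimal sketch that creates the same 3GB partition (adapt the device name to your case):
echo ',3GiB,L' | sudo sfdisk /dev/vdb # one partition: default start, 3GiB size, Linux type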
Let's now create two partitions of 2GB each on the 4GB disk (vdc in my case), and delete the 3GB partition created earlier:
- sudo fdisk /dev/vdc
- command 'n' and Enter
- All data by default, except the last sector, where we will put '+2G'.
- Once the first partition is created, before writing to disk, we are going to create the second partition by repeating the previous steps. We will run into a problem: when we get to the last step, it will tell us that 2GB is not possible. If we have a 4GB disk, why doesn't it allow us to create two 2GB partitions? Exactly: because of the partition table. What we will do in the last step is leave the default value, which will be the entire remaining disk.
- Before writing the changes, let's check that everything is OK: the 'p' command followed by Enter should show us an output similar to:
Device Boot Start End Sectors Size Id Type
/dev/vdc1 2048 4196351 4194304 2G 83 Linux
/dev/vdc2 4196352 8388607 4192256 2G 83 Linux
- If everything is correct, press 'w' and Enter and apply the changes.
- Now we will open the /dev/vdb disk with fdisk to delete its partition: sudo fdisk /dev/vdb
- We delete the partition by pressing 'd' and Enter. As we only have one partition, it deletes it directly; if we had more than one, it would ask us which one we want to delete.
- We apply the changes: 'w' and Enter.
- We check that everything has been to our liking with the command lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
vda 252:0 0 2.2G 0 disk
├─vda1 252:1 0 2.1G 0 part /
├─vda14 252:14 0 4M 0 part
└─vda15 252:15 0 106M 0 part /boot/efi
vdb 252:16 0 5G 0 disk
vdc 252:32 0 4G 0 disk
├─vdc1 252:33 0 2G 0 part
└─vdc2 252:34 0 2G 0 part
Step 4: Format partition: creating a file system
So far we have only defined in the partition table where each partition starts and ends, but we will not be able to use a partition until we create a file system on it. To create a file system we will use the 'mkfs' command, and we will have several options when creating one: ext2, ext3, ext4, btrfs, fat, ntfs, etc.
We create an ext4 system for /dev/vdc1 and a fat one for /dev/vdc2:
sudo mkfs -t ext4 /dev/vdc1
sudo mkfs -t fat /dev/vdc2
Once this is done, let's check that the changes have been applied properly:
sudo file -s /dev/vdc1
sudo file -s /dev/vdc2
In the output we should see that the first partition has an ext4 file system and the second a FAT one.
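Another quick way to check the file systems we have just created is lsblk with the -f option, which adds FSTYPE and UUID columns:
lsblk -f /dev/vdc # should show ext4 for vdc1 and vfat for vdc2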
Step 5: Mounting and unmounting partitions in Linux
After having created some partitions in the previous steps and formatted them, we are now going to mount them so that we can use them. In Linux, mounting a partition means assigning a system folder to the partition, and that folder will contain all the contents of the disk.
Mount and umount
We will start by using the mount and umount commands to mount and unmount partitions. We are going to mount the partitions /dev/vdc1 and /dev/vdc2: the first one in /home/ubuntu/part1 and the second one in /home/ubuntu/part2:
mkdir /home/ubuntu/part1 # We create the folder where we are going to mount the partition.
sudo mount /dev/vdc1 /home/ubuntu/part1 # mount partition
The same with /dev/vdc2
mkdir /home/ubuntu/part2
sudo mount /dev/vdc2 /home/ubuntu/part2
Let's see the contents of the mounted partitions:
ls /home/ubuntu/part1
ls /home/ubuntu/part2
We will see that we have a lost+found folder on the ext4 system, which is used for file system errors: when there is an error and an unreferenced file is found, it is placed inside this folder, with the possibility of recovering it.
Now we are going to create a new file inside our /dev/vdc1 partition, unmount it, and mount it in a different directory:
sudo touch /home/ubuntu/part1/newFile # we have to create it with sudo because we don't have write permission yet; more on this later
sudo umount /dev/vdc1
ls /home/ubuntu/part1 # the file is gone, the partition was unmounted
mkdir /home/ubuntu/part3
sudo mount /dev/vdc1 /home/ubuntu/part3
ls /home/ubuntu/part3
We will see that the file created in /home/ubuntu/part1 is now located in /home/ubuntu/part3, since it was actually saved in the partition we mounted.
We can also use the lsblk command to see where the partition is mounted.
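Besides lsblk, the findmnt utility (part of util-linux) shows where a given device or directory is mounted, together with the file system type and the mount options:
findmnt /dev/vdc1 # query by device
findmnt /home/ubuntu/part3 # query by mount point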
Automatic mounting at system startup
Now we will see how to automate the mounting process every time the system starts, since it would be tedious to have to mount all the partitions every time we turn our machine off and on.
This is usually automated through the /etc/fstab file; let's look at its contents:
cat /etc/fstab
Each row contains a mount of a partition, which contains:
- Partition ID: in this case a partition label is being used, but we can use anything that identifies the device, such as uuid or location (/dev/vdc1).
- Mount point: where the device will be mounted.
- File system: ext4, fat
- Options: here we will put the different mounting options, such as mount for read-only, give permissions to a user to use the partition, etc. Keep in mind that each system has its own options.
- backup: if it is set to zero, no backup will be made.
- check: if it is set to zero, no check is made at startup.
We are going to add some lines to automatically mount the partitions at startup:
Using the editor of your choice, add the following lines to the /etc/fstab file. It is important to do it with sudo so that we are allowed to write to the file:
/dev/vdc1 /home/ubuntu/part1 ext4 rw,user,exec 0 0
/dev/vdc2 /home/ubuntu/part2 vfat umask=000 0 0
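As a more robust alternative (a sketch with a placeholder UUID), the first column can contain the partition UUID instead of the device path, so the entry keeps working even if device names change; query the UUID with blkid and use it like this:
sudo blkid /dev/vdc1 # prints the UUID of the partition
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /home/ubuntu/part1 ext4 rw,user,exec 0 0 # replace with your real UUID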
Once the changes have been saved, let's apply them without rebooting, to test that it works correctly:
sudo mount -a # apply the fstab changes without rebooting
lsblk # check that it is properly mounted.
Now let's check if our user has write permissions:
touch /home/ubuntu/part1/test1
touch /home/ubuntu/part2/test2
In the first one it will not let us, and in the second one we will have no problems. This happens because on ext4 file systems write permission has to be granted on the file system itself, while for FAT the umask mount option already does the job. To be able to write to the ext4 partition with our user, we have to give it permission on the mount point, for example using the chown command:
sudo chown ubuntu /home/ubuntu/part1
If we try it now, we will be able to write to the ext4 partition, and having given permissions, it will work every time:
touch /home/ubuntu/part1/test1
Now, let's unmount everything and reboot the machine to check that it's all working.
sudo umount /dev/vdc1
sudo umount /home/ubuntu/part2 # This is another way to unmount, giving the mount point.
lsblk # check that they are not mounted.
We restart the machine (Virtual machine -> Shutdown -> Restart) and check:
lsblk
Step 6: Creation of a RAID 1 (mirror)
In this section we are going to set up a RAID 1 (mirror) which allows us to increase reliability. The idea is that if one of the disks in the RAID stops working, the data in the file system is still available so that we have some time to replace the defective disk.
Step 6.1: Adding disks to the virtual machine
As explained in Step 1, we are going to add 2 hard disks of 5GB each to the virtual machine and, as in Step 3, create one partition on each of them (/dev/vdb1 and /dev/vdc1); these are the partitions we will use to build the RAID.
Step 6.2: Installing the mdadm tool
By default, this tool is installed on the ubuntu cloud server, but in case you use a different distribution If you are using a different distribution, you will need to install it:
sudo apt install mdadm
Step 6.3: Using the mdadm tool
We start by typing the command, and then move on to explain the details:
sudo mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/vdb1 /dev/vdc1
We explain the parameters:
- --create: to create the raid.
- --verbose: to show us more information about the process.
- /dev/md0: the name of the new raid, usually md0 is used.
- --level=1: creates a raid 1 which is the one we want to use
- --raid-devices=2: number of devices we are going to use, 2 in our case.
- /dev/vdb1: Name of partition 1
- /dev/vdc1: Name of partition 2
Once the command has been executed, the RAID 1 will be created; let's now look at its details:
sudo mdadm --detail /dev/md0
This will show us the RAID information; what we are interested in are the last few lines:
Name : ubuntu:0 (local to host ubuntu)
UUID : e40ba520:5ed1bd37:5c818550:03a18368
Events : 17
Number Major Minor RaidDevice State
0 252 17 0 active sync /dev/vdb1
1 252 33 1 active sync /dev/vdc1
Here we see the UUID of our RAID, and in the last two lines we see the two partitions, which are marked as active sync, meaning that they are active and synchronized. If we run this command very soon after creating the RAID, it may show something different, because the first synchronization takes a little while, but it will get there soon.
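You can also watch the synchronization progress in /proc/mdstat, which shows the state of all md devices and a progress percentage while the mirror is being built:
cat /proc/mdstat # show the current state of the RAID
watch cat /proc/mdstat # refresh the view every 2 seconds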
Step 6.4: Save the RAID configuration
We now have to save the configuration: edit the file /etc/mdadm/mdadm.conf and, under the line that says '# definitions of existing MD arrays', add the following:
ARRAY /dev/md0 UUID=e40ba520:5ed1bd37:5c818550:03a18368
Remember that you can query the UUID with the command:
mdadm --detail /dev/md0
Look for the UUID field.
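Alternatively (a common shortcut), mdadm can generate the ARRAY line itself, and you can append it directly to the configuration file:
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf # appends an ARRAY line containing the UUID of /dev/md0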
After updating /etc/mdadm/mdadm.conf, you have to invoke the following command:
update-initramfs -u
This ensures that at the next boot the RAID is assembled as the /dev/md0 device.
Step 6.5: Partition and format the RAID
As we explained in Steps 2, 3 and 4, we are going to partition the RAID disk:
lsblk # check the data, in my case it would be md0
sudo fdisk /dev/md0
Now let's format the partition:
sudo mkfs -t ext4 /dev/md0p1
Step 6.6: Mount the RAID
As we have done previously, we will create a folder and mount the RAID on it using the mount command:
mkdir /home/ubuntu/data
sudo mount /dev/md0p1 /home/ubuntu/data
sudo chown -R ubuntu /home/ubuntu/data
Now we have our RAID mounted and we can use it; let's try to write to it:
touch /home/ubuntu/data/file
This works, but as we discussed in one of the previous steps, the mount command only mounts the RAID temporarily; to automate this mount at boot, we will need to add the following line to /etc/fstab:
/dev/md0p1 /home/ubuntu/data ext4 defaults 0 0
Step 6.7: Check for fault tolerance
Let's test that everything keeps working without one of the hard drives; since we have a RAID 1 and the data is mirrored, there should be no problems.
The following command marks the /dev/vdb1 partition as damaged:
mdadm --fail /dev/md0 /dev/vdb1
We can see with the following command that the disk appears as faulty:
mdadm --detail /dev/md0
However, we can see that the file system is still mounted and the contents are still available.
ls /home/ubuntu/data/
You can permanently remove the faulty disk with the command:
mdadm --remove /dev/md0 /dev/vdb1
To include it again in the RAID:
mdadm --add /dev/md0 /dev/vdb1
You can check that it is synchronizing with the command:
mdadm --detail /dev/md0
That will show the /dev/vdb1 device in the 'spare rebuilding' state, along with the synchronization percentage.
Step 7: Managing volumes (Logic Volume Manager, LVM)
Logic Volume Manager (LVM) is a software layer that allows you to create logical volumes and easily map them onto physical devices.
The installation of LVM is easy with the command:
sudo apt-get install lvm2
LVM management is based on three basic concepts:
- Physical Volumes (PV): represents a storage unit that provisions storage space for the logical volume we are going to create.
- Volume Group (VG): Represents a storage pool for LVM. A VG will be composed of several PVs, being able to have as many VGs as needed.
- Logical volume (LV): They represent logical units created from a previously created VG. As many LVs can be created as necessary in a VG. The creation of an LV generates a special file in /dev, in the form /dev/group_name/logical_volume_name. The space mapping from an LV to a PV is configurable and can be: Linear, RAID, Cache, ...
Step 7.1: Creating LVM physical volume (PV)
To list the available storage units in the system, we use the following command:
lsblk
In the virtualization manager (virt-manager in our case) we can create new storage units and add them to the virtual machine, as we did in Step 1.
To create a physical volume on the /dev/sdb drive, we use the following command:
pvcreate /dev/sdb
Remember that the /dev/sdb drive must be unused.
To view the existing physical volumes, we use the command:
pvscan
For more information:
pvdisplay
To remove a PV, for example /dev/sdb:
pvremove /dev/sdb
Step 7.2: Creating LVM volume group (VG)
To create a group, we use the vgcreate command:
vgcreate vg_test /dev/sdb /dev/sdc
This adds the sdb and sdc volumes to the 'vg_test' group, making the group capacity the aggregate capacity of the added PVs.
To remove a group, use the vgremove command:
vgremove vg_test
To extend a created group (e.g. vg_test) with more PVs (e.g. /dev/sde) we use the vgextend command:
vgextend vg_test /dev/sde
To reduce the capacity of a created group (e.g. vg_test) just use the vgreduce command indicating the unit (PV) to remove, e.g. /dev/sde:
vgreduce vg_test /dev/sde
To display all existing volume groups
vgscan
Step 7.3: Creating a logical volume (LV)
To create a logical volume, we use the command:
lvcreate --name volume1 --size 100M vg_test
From this point on there is a drive that appears as /dev/mapper/vg_test-volume1.
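You can check the logical volumes that exist with the lvs and lvdisplay commands:
lvs # compact summary of all logical volumes
lvdisplay vg_test # detailed information about the LVs in the vg_test group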
We can now format the logical volume:
mkfs.ext4 /dev/vg_test/volume1
and mount it to store data:
mount /dev/vg_test/volume1 /mnt
You can extend a logical volume by 1 Gbyte more:
lvextend --size +1G /dev/vg_test/volume1
Right after that you have to resize the file system:
resize2fs /dev/vg_test/volume1
You can check with:
df -h
that the file system in volume1 now occupies the entire logical volume.
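As a side note, depending on the LVM version available, lvextend can also resize the file system in the same step with the -r (--resizefs) option, avoiding the separate call to resize2fs:
lvextend -r --size +1G /dev/vg_test/volume1 # extends the LV and grows the ext4 file system in one go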
Reducing the size of a logical volume is a bit more complicated; you could lose data if it is not done correctly!
First, we unmount the volume:
umount /mnt
Before reducing the size of the file system, we check its integrity:
e2fsck -f /dev/vg_test/volume1
And we resize it (reduce in size):
resize2fs /dev/vg_test/volume1 500M
Now, lastly, you can reduce the size of the volume:
lvreduce --size 500M /dev/vg_test/volume1
To check that everything went well, resize the file system again so that it occupies all the available space.
resize2fs /dev/vg_test/volume1
And you can mount the file system again:
mount /dev/vg_test/volume1 /mnt
and check with:
df -h
that the new available size is only 500 MBytes.
Step 8: GlusterFS
GlusterFS is a scalable client-server network-attached storage file system. You can also check official documentation for more details.
A few concepts:
- Trusted Storage Pool (TSP): cluster formed by one or more nodes.
- Node: Server that provides storage space.
- Client: machine on which a volume is mounted
and regarding storage:
- Brick: Minimum unit of storage (given by a file system exported by a node).
- Gluster volume: logical unit composed of bricks.
Step 8.1: Install GlusterFS server
sudo apt install glusterfs-server
sudo systemctl enable glusterd
sudo systemctl start glusterd
You can check logs at: /var/log/glusterfs/glusterd.log
Step 8.2: Managing the server pool
Add a server to the pool with:
gluster peer probe node-name
Check list of cluster nodes in the pool:
gluster pool list
Check the status of the nodes:
gluster peer status
Remove a server from the pool:
gluster peer detach node-name
Step 8.3: Managing storage
On all servers in the cluster, create a brick, for example:
mkdir -p /data/glusterfs/myvol1/brick1
mount /dev/vdb1 /data/glusterfs/myvol1/brick1
On one server, create the volume:
gluster volume create myvol1 node-name:/data/glusterfs/myvol1/brick1/brick
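Note that a newly created volume has to be started before clients can mount it:
gluster volume start myvol1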
To list existing volumes:
gluster volume list
You can check the volume information and status:
gluster volume info [volume-name]
gluster volume status [volume-name]
You can also delete a volume:
gluster volume delete [volume-name]
You can add more bricks to a given volume:
gluster volume add-brick [volume-name] [node-name]:/path/to/directory
After adding bricks, you may need to rebalance the volume; you can also remove bricks from a volume:
gluster volume rebalance [volume-name] [start/stop/status]
gluster volume remove-brick [volume-name] [node-name]:/path/to/brick [start/stop/status/commit]
Distributed and replicated volumes: the first command below creates a volume distributed over four bricks, while the second creates a volume replicated across three bricks:
gluster volume create vGluster node1:/brick node2:/brick node3:/brick node4:/brick
gluster volume create vGluster replica 3 node1:/brick node2:/brick node3:/brick
If a peer probe fails on a cloned virtual machine, see: https://wenhan.blog/post/glusterfs-failed-to-probe-a-cloned-peer/
Step 8.4: GlusterFS client
apt install glusterfs-client
mount -t glusterfs -o _netdev server1:/vol0 /mnt
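To make the client mount persistent across reboots (a sketch reusing the same server1:/vol0 volume and /mnt mount point from above), an /etc/fstab entry like the following can be used:
server1:/vol0 /mnt glusterfs defaults,_netdev 0 0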
Step 9: Resize virtual disk size of Ubuntu cloud image already imported into libvirt
With the virtual machine off, from the host:
qemu-img resize ubuntu-18.04-server-cloudimg-arm64.img +8G
This gives the image 8 GBytes of additional space.
Now, from the virtual machine, we increase the partition size:
sudo growpart /dev/sda 1
And then we resize the file system:
sudo resize2fs /dev/sda1
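To verify the result, the root file system should now report the extra space:
df -h / # the size of / should have grown by about 8 GBytes
lsblk /dev/sda # shows the new partition and disk size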
Exercise
- Deploy a cloud image on a VM with 4 disks of 1 GByte each.
- Configure a RAID 1+0 (two RAID 1 arrays with two disks each, and then one RAID 0 on top of the two RAID 1 arrays).
- Create a logical volume using 50% of the size available in the RAID.
- Create an ext4 filesystem on the logical volume.
- Extend the logical volume to 100% of the size available in the RAID.
- Extend the filesystem to use the entire logical volume.