A clone of a previously used virtual machine should be created. All VM-specific information, such as user accounts, shell histories, and SSH host keys, should be removed from the cloned disk image. Which of the following tools can perform these tasks?
virt-reset
virt-sparsify
virt-rescue
virt-sysprep
sysprep
virt-wipe
Sysprep is a tool that removes personal account and security information and then prepares the machine to be used as an image. It is supported by Windows and some Linux distributions. It can also remove drivers and other machine-specific settings. Sysprep is required when creating a managed image outside of a gallery in Azure.
https://learn.microsoft.com/en-us/azure/virtual-machines/generalize
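For illustration, on a Windows guest the image is typically generalized before cloning with a command like the following (the options shown are those documented by Microsoft; the exact path depends on the Windows version):
%WINDIR%\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown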
Which of the following values would be valid in the FROM statement in a Dockerfile?
ubuntu:focal
docker://ubuntu:focal
registry:ubuntu:focal
file:/tmp/ubuntu/Dockerfile
http://docker.example.com/images/ubuntu-focal.iso
The FROM statement in a Dockerfile specifies the base image from which the subsequent instructions are executed. Its value can be an image name, an image name with a tag, or an image ID. The image name can be either a repository name or a repository name with a registry prefix; for example, ubuntu is a repository name, and docker.io/ubuntu is a repository name with a registry prefix. The tag is an optional identifier used to specify a particular version or variant of an image; for example, ubuntu:focal refers to the image with the focal tag in the ubuntu repository. The image ID is a unique identifier that is automatically generated when an image is built or pulled, for example sha256:9b0dafaadb1cd1d14e4db51bd0f4c0d56b6b551b2982b2b7c637ca143ad605d2.
Therefore, the only valid value in the FROM statement among the given options is ubuntu:focal, which is an image name with a tag. The other options are invalid because FROM accepts only image references: docker:// URIs, a literal registry: prefix, local file paths to a Dockerfile, and HTTP URLs pointing to ISO images are not accepted.
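As a minimal illustration, a Dockerfile using this form of FROM could look like the following (the package installed is only an example):
FROM ubuntu:focal
RUN apt-get update && apt-get install -y curl
CMD ["bash"]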
What is the purpose of the kubelet service in Kubernetes?
Provide a command line interface to manage Kubernetes.
Build a container image as specified in a Dockerfile.
Manage permissions of users when interacting with the Kubernetes API.
Run containers on the worker nodes according to the Kubernetes configuration.
Store and replicate Kubernetes configuration data.
The purpose of the kubelet service in Kubernetes is to run containers on the worker nodes according to the Kubernetes configuration. The kubelet is an agent that runs on each node and communicates with the Kubernetes control plane. It receives a set of PodSpecs that describe the desired state of the pods that should be running on the node and ensures that the containers described in those PodSpecs are running and healthy. The kubelet also reports the status of the node and the pods back to the control plane. The kubelet does not manage containers that were not created by Kubernetes.
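On a typical systemd-based node the kubelet runs as a system service; its state and the node's health can be checked with commands like the following (the node name worker-1 is an example):
systemctl status kubelet
kubectl get nodes
kubectl describe node worker-1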
Which file format is used by libvirt to store configuration data?
INI-style text files
SQLite databases
XML files
Java-like properties files
Text files containing key/value pairs
Libvirt uses XML files to store configuration data for objects in the libvirt API, such as domains, networks, and storage pools. This allows for ease of extension in future releases and validation of documents prior to usage. Libvirt does not use any of the other file formats listed in the question.
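For example, the XML definition of an existing guest can be displayed or edited with virsh (the domain name myvm is an example):
virsh dumpxml myvm
virsh edit myvm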
How can data be shared between several virtual machines running on the same Linux-based host system?
By writing data to the file system since all virtual machines on the same host system use the same file system.
By mounting other virtual machines' file systems from /dev/virt-disks/remote/.
By setting up a ramdisk in one virtual machine and mounting it using its UUID in the other VMs.
By using a network file system or file transfer protocol.
By attaching the same virtual hard disk to all virtual machines and activating EXT4 sharing extensions on it.
The correct way to share data between several virtual machines running on the same Linux-based host system is by using a network file system or file transfer protocol. A network file system (NFS) is a distributed file system protocol that allows a user on a client computer to access files over a network in a manner similar to how local storage is accessed. A file transfer protocol such as FTP is a standard network protocol used for the transfer of computer files between a client and a server on a computer network. Both methods allow data to be shared between virtual machines regardless of their underlying file systems or virtualization technologies. The other options are incorrect because they either do not work or are not feasible. Option A is wrong because each virtual machine has its own file system that is not directly accessible by other virtual machines. Option B is wrong because there is no such device as /dev/virt-disks/remote/ that can be used to mount other virtual machines' file systems. Option C is wrong because a ramdisk is a volatile storage device that is not suitable for sharing data between virtual machines. Option E is wrong because attaching the same virtual hard disk to multiple virtual machines can cause data corruption and conflicts, and EXT4 does not have any sharing extensions that can prevent this. References: https://kb.vmware.com/s/article/1012706
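A sketch of the NFS approach, assuming the host or one of the guests exports a directory and the hostname nfs-server.example.com is reachable from the other guests (names and addresses are examples):
# on the NFS server, in /etc/exports
/srv/share 192.168.122.0/24(rw,sync)
# on each guest
mount -t nfs nfs-server.example.com:/srv/share /mnt/share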
Which directory is used by cloud-init to store status information and configuration information retrieved from external sources?
/var/lib/cloud/
/etc/cloud-init/cache/
/proc/sys/cloud/
/tmp/.cloud/
/opt/cloud/var/
cloud-init uses the /var/lib/cloud/ directory to store status information and configuration information retrieved from external sources, such as the cloud platform's metadata service or user-data files. The directory contains subdirectories for different types of data, such as instance, data, handlers, scripts, and sem. The instance subdirectory contains information specific to the current instance, such as the instance ID, the user data, and the cloud-init configuration. The data subdirectory contains information about the data sources that cloud-init detected and used. The handlers subdirectory contains information about the handlers that cloud-init executed. The scripts subdirectory contains scripts that cloud-init runs at different stages of the boot process, such as per-instance, per-boot, per-once, and vendor. The sem subdirectory contains semaphore files that cloud-init uses to track the execution status of different modules and stages.
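For example, on a booted cloud instance the stored state can be inspected as follows (output varies by platform):
ls /var/lib/cloud/
ls /var/lib/cloud/instance/
cloud-init status --long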
Which of the following commands lists all differences between the disk images vm1-snap.img and vm1.img?
virt-delta -a vm1-snap.img -A vm1.img
virt-cp-in -a vm1-snap.img -A vm1.img
virt-cmp -a vm1-snap.img -A vm1.img
virt-history -a vm1-snap.img -A vm1.img
virt-diff -a vm1-snap.img -A vm1.img
The virt-diff command-line tool can be used to list the differences between files in two virtual machines or disk images. The output shows the changes to a virtual machine's disk images after it has been running, and the command can also be used to show the difference between overlays. To specify two guests, you have to use the -a or -d option for the first guest and the -A or -D option for the second guest, for example: virt-diff -a old.img -A new.img. Therefore, the correct command to list all differences between the disk images vm1-snap.img and vm1.img is: virt-diff -a vm1-snap.img -A vm1.img. The other commands do not perform this task: virt-delta, virt-cmp, and virt-history are not part of the libguestfs tool set, and copying files into a disk image is done with virt-copy-in rather than a tool called virt-cp-in.
FILL BLANK
What LXC command lists containers sorted by their CPU, block I/O or memory consumption? (Specify ONLY the command without any path or parameters.)
lxc-top
lxc-top displays a continuously updated list of the containers running on the host together with their resource consumption, similar to the top utility for processes. The listing includes CPU usage, block I/O, and memory consumption, and the containers can be sorted by these columns, which makes lxc-top the command to use when containers need to be ranked by CPU, block I/O, or memory consumption. It is part of the LXC userspace tools and is invoked simply as lxc-top.
What is the purpose of capabilities in the context of container virtualization?
Map potentially dangerous system calls to an emulation layer provided by the container virtualization.
Restrict the disk space a container can consume.
Enable memory deduplication to cache files which exist in multiple containers.
Allow regular users to start containers with elevated permissions.
Prevent processes from performing actions which might infringe the container.
Capabilities are a way of implementing fine-grained access control in Linux. They are a set of flags that define the privileges that a process can have. By default, a process inherits the capabilities of its parent, but some capabilities can be dropped or added by the process itself or by the kernel. In the context of container virtualization, capabilities are used to prevent processes from performing actions which might infringe the container, such as accessing the host's devices, mounting filesystems, changing the system time, or killing other processes. Capabilities allow containers to run with a reduced set of privileges, enhancing the security and isolation of the container environment. For example, Docker uses a default set of capabilities that are granted to the processes running inside a container and allows users to add or drop capabilities as needed.
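For illustration, Docker exposes this directly on the command line, and the capability set of the current process can be inspected with capsh from libcap (the image and the capability are chosen as examples):
docker run --cap-drop ALL --cap-add NET_BIND_SERVICE -p 80:80 nginx
capsh --print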
The command virsh vol-list vms returns the following error:
error: failed to get pool 'vms'
error: Storage pool not found: no storage pool with matching name 'vms'
Given that the directory /vms exists, which of the following commands resolves this issue?
dd if=/dev/zero of=/vms bs=1 count=0 flags=name:vms
libvirt-poolctl new --name=/vms --type=dir --path=/vms
qemu-img pool vms:/vms
virsh pool-create-as vms dir --target /vms
touch /vms/.libvirtpool
The command virsh pool-create-as vms dir --target /vms creates and starts a transient storage pool named vms of type dir with the target directory /vms. This command resolves the storage pool not found error, as it makes the existing directory /vms visible to libvirt as a storage pool. The other commands are invalid because dd merely writes to a file and has no flags=name option, libvirt-poolctl does not exist, qemu-img has no pool subcommand, and creating a hidden file inside /vms does not register anything with libvirt.
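Because pool-create-as only creates a transient pool, a persistent definition is usually preferable; a sketch of the persistent variant:
virsh pool-define-as vms dir --target /vms
virsh pool-build vms
virsh pool-start vms
virsh pool-autostart vms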
Which file in a cgroup directory contains the list of processes belonging to this cgroup?
pids
members
procs
tasks
subjects
The file procs in a cgroup directory contains the list of processes belonging to this cgroup. Each line in the file shows the PID of a process that is a member of the cgroup. A process can be moved to a cgroup by writing its PID into the cgroup's procs file. For example, to move the process with PID 24982 to the cgroup cg1, the following command can be used: echo 24982 > /sys/fs/cgroup/cg1/procs. The file procs is different from the file tasks, which lists the threads belonging to the cgroup. The file procs can be used to move all threads in a thread group at once, while the file tasks can be used to move individual threads.
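A short sketch, assuming a cgroup v2 hierarchy mounted at /sys/fs/cgroup, where the file appears under the name cgroup.procs:
mkdir /sys/fs/cgroup/cg1
echo $$ > /sys/fs/cgroup/cg1/cgroup.procs   # move the current shell into cg1
cat /sys/fs/cgroup/cg1/cgroup.procs         # list the member PIDs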
Which of the following mechanisms are used by LXC and Docker to create containers? (Choose three.)
Linux Capabilities
Kernel Namespaces
Control Groups
POSIX ACLs
File System Permissions
LXC and Docker are both container technologies that use Linux kernel features to create isolated environments for running applications. The main mechanisms that they use are kernel namespaces, which give each container its own view of process IDs, network interfaces, mount points, hostnames, users, and IPC; control groups, which limit and account for the resources such as CPU, memory, and block I/O that a container may consume; and Linux capabilities, which split root privileges into smaller units so that containerized processes run with only the privileges they need.
POSIX ACLs and file system permissions are not mechanisms used by LXC and Docker to create containers. They are methods of controlling the access to files and directories on a file system, which can be applied to any process, not just containers.
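As an illustration, the namespace mechanism can be exercised directly with the util-linux tools (run as root; flags per the unshare man page):
unshare --fork --pid --mount-proc /bin/sh   # start a shell in new PID and mount namespaces
ps aux                                      # inside: only the new shell and ps are visible
exit                                        # leave the namespaced shell
lsns                                        # back on the host: list all active namespaces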
Which of the following statements are true regarding VirtualBox?
It is a hypervisor designed as a special kernel that is booted before the first regular operating system starts.
It only supports Linux as a guest operating system and cannot run Windows inside a virtual machine.
It requires dedicated shared storage, as it cannot store virtual machine disk images locally on block devices of the virtualization host.
It provides both a graphical user interface and command line tools to administer virtual machines.
It is available for Linux only and requires the source code of the currently running Linux kernel to be available.
VirtualBox is a hosted hypervisor, which means it runs as an application on top of an existing operating system, not as a special kernel that is booted before the first regular operating system starts. VirtualBox supports a large number of guest operating systems, including Windows, Linux, Solaris, OS/2, and OpenBSD. VirtualBox does not require dedicated shared storage, as it can store virtual machine disk images locally on block devices of the virtualization host, on network shares, or on iSCSI targets. VirtualBox provides both a graphical user interface (GUI) and command line tools (VBoxManage) to administer virtual machines. VirtualBox is available for Windows, Linux, macOS, and Solaris hosts and does not require the source code of the currently running Linux kernel to be available.
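For example, the command line interface can be used alongside the GUI (the VM name debian-test is an example):
VBoxManage list vms
VBoxManage startvm "debian-test" --type headless
VBoxManage controlvm "debian-test" poweroff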
After setting up a data container using the following command:
docker create -v /data --name datastore debian /bin/true
how is an additional container started which shares the /data volume with the datastore container?
docker run --share-with datastore --name service debian bash
docker run -v datastore:/data --name service debian bash
docker run --volumes-from datastore --name service debian bash
docker run -v /data --name service debian bash
docker run --volume-backend datastore -v /data --name service debian bash
The correct way to start a new container that shares the /data volume with the datastore container is to use the --volumes-from flag. This flag mounts all the defined volumes from the referenced containers. In this case, the datastore container has a volume named /data, which is mounted in the service container at the same path. The other options are incorrect because they either use invalid flags, such as --share-with or --volume-backend, or they create new volumes instead of sharing the existing one, such as -v datastore:/data or -v /data.
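The sharing can be verified by writing a file through one container and reading it through another (the file name is an example):
docker run --volumes-from datastore --name writer debian sh -c 'echo hello > /data/test.txt'
docker run --volumes-from datastore --name reader debian cat /data/test.txt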
Which of the following statements are true about sparse images in the context of virtual machine storage? (Choose two.)
Sparse images are automatically shrunk when files within the image are deleted.
Sparse images may consume an amount of space different from their nominal size.
Sparse images can only be used in conjunction with paravirtualization.
Sparse images allocate backend storage at the first usage of a block.
Sparse images are automatically resized when their maximum capacity is about to be exceeded.
Sparse images are a type of virtual disk image that grows in size as data is written to it but does not shrink when data is deleted from it. Sparse images may consume an amount of space different from their nominal size, which is the maximum size that the image can grow to. For example, a sparse image with a nominal size of 100 GB may only take up 20 GB of physical storage if only 20 GB of data is written to it. Sparse images allocate backend storage at the first usage of a block, which means that physical storage is only used when the virtual machine actually writes data to a block. This can save storage space and improve performance, as the image does not need to be pre-allocated or zeroed out.
Sparse images are not automatically shrunk when files within the image are deleted, because the virtual machine does not inform the host system about the freed blocks. To reclaim the unused space, a special tool such as virt-sparsify or qemu-img must be used to compact the image. Sparse images can be used with both full virtualization and paravirtualization, as the type of virtualization does not affect the format of the disk image. Sparse images are not automatically resized when their maximum capacity is about to be exceeded, because this would require changing the partition table and the filesystem of the image, which is not a trivial task. To resize a sparse image, a tool such as virt-resize or qemu-img must be used to increase the nominal size and the filesystem size of the image.
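This behaviour can be observed directly, for instance with a qcow2 image (file names and sizes are examples):
qemu-img create -f qcow2 disk.qcow2 100G    # nominal (virtual) size: 100 GB
qemu-img info disk.qcow2                    # shows virtual size vs. actual disk size
du -h disk.qcow2                            # space actually consumed on the host
virt-sparsify --in-place disk.qcow2         # reclaim space freed inside the guest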
What is the default provider of Vagrant?
lxc
hyperv
virtualbox
vmware_workstation
docker
Vagrant is a tool that allows users to create and configure lightweight, reproducible, and portable development environments. Vagrant supports multiple providers, which are the backends that Vagrant uses to create and manage the virtual machines. VirtualBox is the default provider for Vagrant: it is free, cross-platform, and has been supported by Vagrant for years, so it provides the lowest friction for new users to get started. However, users can also use other providers, such as VMware, Hyper-V, Docker, or LXC, depending on their preferences and needs. To use another provider, users must install it as a Vagrant plugin and specify it when running Vagrant commands. Users can also change the default provider by setting the VAGRANT_DEFAULT_PROVIDER environment variable.
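For example (the libvirt provider is only an illustration and must be installed as a plugin first):
vagrant up                                  # uses the default provider, VirtualBox
vagrant up --provider=libvirt               # explicitly select another provider
export VAGRANT_DEFAULT_PROVIDER=libvirt     # change the default for this shell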
Which of the following kinds of data can cloud-init process directly from user-data? (Choose three.)
Shell scripts to execute
Lists of URLs to import
ISO images to boot from
cloud-config declarations in YAML
Base64-encoded binary files to execute
Cloud-init is a tool that allows users to customize the configuration and behavior of cloud instances during the boot process. Cloud-init can process different kinds of data that are passed to the instance via user-data, which is a mechanism provided by various cloud providers to inject data into the instance. Among the kinds of data that cloud-init can process directly from user-data are shell scripts to execute (user-data starting with #!), cloud-config declarations in YAML (starting with #cloud-config), and lists of URLs to import (starting with #include).
The other kinds of data listed in the question are not directly processed by cloud-init from user-data: ISO images are boot media handled by the platform rather than by cloud-init, and Base64-encoded binary files are not a user-data format that cloud-init executes directly.
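A minimal cloud-config document, as an illustration (the package and service names are examples):
#cloud-config
package_update: true
packages:
  - nginx
runcmd:
  - systemctl enable --now nginx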
Which of the following statements are true regarding resource management for full virtualization? (Choose two.)
The hypervisor may provide fine-grained limits to internal elements of the guest operating system such as the number of processes.
The hypervisor provides each virtual machine with hardware of a defined capacity that limits the resources of the virtual machine.
Full virtualization cannot pose any limits to virtual machines and always assigns the host system's resources in a first-come-first-serve manner.
All processes created within the virtual machines are transparently and equally scheduled in the host system for CPU and I/O usage.
It is up to the virtual machine to use its assigned hardware resources and create, for example, an arbitrary amount of network sockets.
Resource management for full virtualization is the process of allocating and controlling the physical resources of the host system to the virtual machines running on it. The hypervisor is the software layer that performs this task, by providing each virtual machine with virtual hardware of a defined capacity that limits the resources of the virtual machine. For example, the hypervisor can specify how many virtual CPUs, how much memory, and how much disk space each virtual machine can use. The hypervisor can also enforce resource isolation and prioritization among the virtual machines, to ensure that they do not interfere with each other or consume more resources than they are allowed to.

The hypervisor cannot provide fine-grained limits to internal elements of the guest operating system, such as the number of processes, because the hypervisor does not have access to the internal state of the guest operating system. The guest operating system is responsible for managing its own resources within the virtual hardware provided by the hypervisor; for example, it can create an arbitrary amount of network sockets, as long as it does not exceed the network bandwidth allocated by the hypervisor.

Full virtualization can pose limits to virtual machines and does not always assign the host system's resources in a first-come-first-serve manner. The hypervisor can use various resource management techniques, such as reservation, limit, share, weight, and quota, to allocate and control the resources of the virtual machines, and scheduling algorithms, such as round-robin, fair-share, or priority-based, to distribute the resources among the virtual machines according to their needs. Likewise, processes created within the virtual machines are not transparently and equally scheduled in the host system for CPU and I/O usage: the hypervisor schedules the virtual CPUs of the virtual machines on the physical CPUs of the host (for example with proportional-share, co-scheduling, or gang scheduling) and schedules the I/O requests of the virtual machines on the physical I/O devices (for example with deadline, anticipatory, or completely fair queuing). The hypervisor can also use resource accounting and monitoring mechanisms, such as cgroups, perf, or sar, to measure and report the resource consumption and performance of the virtual machines.
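As an illustration of hardware with a defined capacity, a KVM/libvirt domain definition could contain elements like the following (the values are examples):
<vcpu placement='static'>2</vcpu>
<memory unit='GiB'>4</memory>
<currentMemory unit='GiB'>2</currentMemory>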