LPIC-3: Virtualization and Containerization - Exam 305, version 3.0 Questions and Answers
A clone of a previously used virtual machine should be created. All VM-specific information, such as user accounts, shell histories and SSH host keys, should be removed from the cloned disk image. Which of the following tools can perform these tasks?
Options:
virt-reset
virt-sparsify
virt-rescue
virt-sysprep
sysprep
virt-wipe
Answer:
E
Explanation:
Sysprep is a tool that removes personal account and security information and then prepares the machine to be used as an image. It is supported by Windows and by some Linux distributions. It can also remove drivers and other machine-specific settings. Sysprep is required when creating a managed image outside of a gallery in Azure.
References:
- https://learn.microsoft.com/en-us/azure/virtual-machines/generalize
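On Linux hosts, the libguestfs suite provides virt-sysprep for this kind of cleanup of a cloned disk image. A minimal sketch, assuming the clone lives at the hypothetical path /var/lib/libvirt/images/clone.qcow2 and the VM is shut down:
# Remove machine-specific state (SSH host keys, log files, user accounts
# depending on which operations are enabled) from the offline image.
virt-sysprep -a /var/lib/libvirt/images/clone.qcow2
# List the individual cleanup operations that can be enabled or disabled.
virt-sysprep --list-operations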
Which of the following values would be valid in the FROM statement in a Dockerfile?
Options:
ubuntu:focal
docker://ubuntu:focal
registry:ubuntu:focal
file:/tmp/ubuntu/Dockerfile
http://docker.example.com/images/ubuntu-focal.iso
Answer:
A
Explanation:
The FROM statement in a Dockerfile specifies the base image from which the subsequent instructions are executed [1]. The value of the FROM statement can be either an image name, an image name with a tag, or an image ID [1]. The image name can be either a repository name or a repository name with a registry prefix [2]. For example, ubuntu is a repository name, and docker.io/ubuntu is a repository name with a registry prefix [2]. The tag is an optional identifier that can be used to specify a particular version or variant of an image [1]. For example, ubuntu:focal refers to the image with the focal tag in the ubuntu repository [2]. The image ID is a unique identifier that is automatically generated when an image is built or pulled [1]. For example, sha256:9b0dafaadb1cd1d14e4db51bd0f4c0d56b6b551b2982b2b7c637ca143ad605d2 is an image ID [3].
Therefore, the only valid value in the FROM statement among the given options is ubuntu:focal, which is an image name with a tag. The other options are invalid because:
- docker://ubuntu:focal is not a valid image name format. The docker:// prefix is used to specify a transport protocol, not a registry prefix [4].
- registry:ubuntu:focal is not a valid image name format. The registry prefix should be a valid hostname or IP address, not a generic term [2].
- file:/tmp/ubuntu/Dockerfile is not a valid image name format. The file: prefix is used to specify a local file path, not an image name [5].
- http://docker.example.com/images/ubuntu-focal.iso is not a valid image name format. The http:// prefix is used to specify a web URL, not an image name [5].
References:
- 1: Dockerfile reference | Docker Docs
- 2: docker - Using FROM statement in dockerfile - Stack Overflow
- 3: How to get the image id from a docker image - Stack Overflow
- 4: skopeo - Docker Registry v2 API tool - Linux Man Pages (1)
- 5: How to build a Docker image from a local Dockerfile? - Stack Overflow
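A minimal sketch showing the valid form in use; the tag myimage:demo and the curl package are illustrative choices:
# Write a Dockerfile whose FROM line names a base image with a tag.
cat > Dockerfile <<'EOF'
FROM ubuntu:focal
RUN apt-get update && apt-get install -y curl
CMD ["bash"]
EOF
# Build it; Docker resolves ubuntu:focal against the default registry.
docker build -t myimage:demo .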
What is the purpose of the kubelet service in Kubernetes?
Options:
Provide a command line interface to manage Kubernetes.
Build a container image as specified in a Dockerfile.
Manage permissions of users when interacting with the Kubernetes API.
Run containers on the worker nodes according to the Kubernetes configuration.
Store and replicate Kubernetes configuration data.
Answer:
D
Explanation:
The purpose of the kubelet service in Kubernetes is to run containers on the worker nodes according to the Kubernetes configuration. The kubelet is an agent or program that runs on each node and communicates with the Kubernetes control plane. It receives a set of PodSpecs that describe the desired state of the pods that should be running on the node, and ensures that the containers described in those PodSpecs are running and healthy. The kubelet also reports the status of the node and the pods back to the control plane. The kubelet does not manage containers that were not created by Kubernetes. References:
- Kubernetes Docs - kubelet
- Learn Steps - What is kubelet and what it does: Basics on Kubernetes
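A hedged sketch of inspecting the kubelet on a kubeadm-style worker node, where it typically runs as a systemd service (unit names can differ between distributions):
# Confirm the kubelet agent is running on this node.
systemctl status kubelet
# Follow its logs to watch PodSpec synchronisation and container events.
journalctl -u kubelet -f
# From a machine with cluster credentials, check that the node reports Ready.
kubectl get nodes -o wide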
Which file format is used by libvirt to store configuration data?
Options:
INI-style text files
SQLite databases
XML files
Java-like properties files
Text files containing key/value pairs
Answer:
C
Explanation:
Libvirt uses XML files to store configuration data for objects in the libvirt API, such as domains, networks, storage, etc. This allows for ease of extension in future releases and validation of documents prior to usage. Libvirt does not use any of the other file formats listed in the question. References:
- libvirt: XML Format
- LPIC-3 Virtualization and Containerization: Topic 305.1: Virtualization Concepts and Theory
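A short sketch of working with that XML from the command line, assuming a domain named vm1 (hypothetical) is defined:
# Dump the XML definition of a domain for inspection or backup.
virsh dumpxml vm1 > vm1.xml
# Edit the definition; libvirt validates the XML before applying it.
virsh edit vm1
# Networks and storage pools are described in XML as well.
virsh net-dumpxml default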
How can data be shared between several virtual machines running on the same Linux-based host system?
Options:
By writing data to the file system since all virtual machines on the same host system use the same file system.
By mounting other virtual machines' file systems from /dev/virt-disks/remote/.
By setting up a ramdisk in one virtual machine and mounting it using its UUID in the other VMs.
By using a network file system or file transfer protocol.
By attaching the same virtual hard disk to all virtual machines and activating EXT4 sharing extensions on it.
Answer:
DExplanation:
The correct way to share data between several virtual machines running on the same Linux-based host system is by using a network file system or file transfer protocol. A network file system (NFS) is a distributed file system protocol that allows a user on a client computer to access files over a network in a manner similar to how local storage is accessed. A file transfer protocol (FTP) is a standard network protocol used for the transfer of computer files between a client and server on a computer network. Both methods allow data to be shared between virtual machines regardless of their underlying file systems or virtualization technologies. The other options are incorrect because they either do not work or are not feasible. Option A is wrong because each virtual machine has its own file system that is not directly accessible by other virtual machines. Option B is wrong because there is no such device as /dev/virt-disks/remote/ that can be used to mount other virtual machines’ file systems. Option C is wrong because a ramdisk is a volatile storage device that is not suitable for sharing data between virtual machines. Option E is wrong because attaching the same virtual hard disk to multiple virtual machines can cause data corruption and conflicts, and EXT4 does not have any sharing extensions that can prevent this.
References:
- https://kb.vmware.com/s/article/1012706
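A minimal NFS sketch; the export path, network range and server address are hypothetical, and an NFS server package is assumed to be installed in the exporting VM:
# In the VM that exports the data:
echo '/srv/share 192.168.122.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra
# In any other VM on the same virtual network:
mount -t nfs 192.168.122.10:/srv/share /mnt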
Which directory is used by cloud-init to store status information and configuration information retrieved from external sources?
Options:
/var/lib/cloud/
/etc/cloud-init/cache/
/proc/sys/cloud/
/tmp/.cloud/
/opt/cloud/var/
Answer:
A
Explanation:
cloud-init uses the /var/lib/cloud/ directory to store status information and configuration information retrieved from external sources, such as the cloud platform’s metadata service or user data files. The directory contains subdirectories for different types of data, such as instance, data, handlers, scripts, and sem. The instance subdirectory contains information specific to the current instance, such as the instance ID, the user data, and the cloud-init configuration. The data subdirectory contains information about the data sources that cloud-init detected and used. The handlers subdirectory contains information about the handlers that cloud-init executed. The scripts subdirectory contains scripts that cloud-init runs at different stages of the boot process, such as per-instance, per-boot, per-once, and vendor. The sem subdirectory contains semaphore files that cloud-init uses to track the execution status of different modules and stages. References:
- Configuring and managing cloud-init for RHEL 8 - Red Hat Customer Portal
- vsphere - what is the linux file location where the cloud-init user …
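A quick way to look at that state on a running instance (a sketch; the exact subdirectory contents depend on the data source):
# Status and retrieved configuration are kept below /var/lib/cloud/.
ls /var/lib/cloud/
ls /var/lib/cloud/instance/
# Summarise what cloud-init did during the last boot.
cloud-init status --long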
Which of the following commands lists all differences between the disk images vm1-snap.img and vm1.img?
Options:
virt-delta -a vm1-snap.img -A vm1.img
virt-cp-in -a vm1-snap.img -A vm1.img
virt-cmp -a vm1-snap.img -A vm1.img
virt-history -a vm1-snap.img -A vm1.img
virt-diff -a vm1-snap.img -A vm1.img
Answer:
E
Explanation:
The virt-diff command-line tool can be used to list the differences between files in two virtual machines or disk images. The output shows the changes to a virtual machine’s disk images after it has been running. The command can also be used to show the difference between overlays [1]. To specify two guests, you have to use the -a or -d option for the first guest, and the -A or -D option for the second guest. For example: virt-diff -a old.img -A new.img [1]. Therefore, the correct command to list all differences between the disk images vm1-snap.img and vm1.img is: virt-diff -a vm1-snap.img -A vm1.img. The other commands are not related to finding differences between disk images. virt-delta is a tool to create delta disks from two disk images [2]. virt-cp-in is a tool to copy files and directories into a virtual machine disk image [3]. virt-cmp is a tool to compare two files or directories in a virtual machine disk image [4]. virt-history is a tool to show the history of a virtual machine disk image [5]. References:
- 21.13. virt-diff: Listing the Differences between Virtual Machine Files …
- 21.14. virt-delta: Creating Delta Disks from Two Disk Images …
- 21.6. virt-cp-in: Copying Files and Directories into a Virtual Machine Disk Image …
- 21.7. virt-cmp: Comparing Two Files or Directories in a Virtual Machine Disk Image …
- 21.8. virt-history: Showing the History of a Virtual Machine Disk Image …
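The answer as it would be run, shown as a short sketch (the images should not be in use while comparing):
# List all file-level differences between the snapshot and the current image.
virt-diff -a vm1-snap.img -A vm1.img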
FILL BLANK
What LXC command lists containers sorted by their CPU, block I/O or memory consumption? (Specify ONLY the command without any path or parameters.)
Options:
Answer:
lxc-top
Explanation:
lxc-top is part of the LXC userspace tools and shows a continuously updated listing of the running containers together with their CPU, block I/O and memory consumption; the listing can be sorted by those columns. Only the command name itself is required, without any path or parameters. References:
- lxc-top(1) - Linux man page
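A usage sketch; the output is an interactive, continuously refreshed listing:
# Display running containers with their CPU, block I/O and memory usage.
lxc-top
# Sorting keys and refresh options are documented in the man page.
man lxc-top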
What is the purpose of capabilities in the context of container virtualization?
Options:
Map potentially dangerous system calls to an emulation layer provided by the container virtualization.
Restrict the disk space a container can consume.
Enable memory deduplication to cache files which exist in multiple containers.
Allow regular users to start containers with elevated permissions.
Prevent processes from performing actions which might infringe the container.
Answer:
E
Explanation:
Capabilities are a way of implementing fine-grained access control in Linux. They are a set of flags that define the privileges that a process can have. By default, a process inherits the capabilities of its parent, but some capabilities can be dropped or added by the process itself or by the kernel. In the context of container virtualization, capabilities are used to prevent processes from performing actions that might infringe the container, such as accessing the host’s devices, mounting filesystems, changing the system time, or killing other processes. Capabilities allow containers to run with a reduced set of privileges, enhancing the security and isolation of the container environment. For example, Docker uses a default set of capabilities that are granted to the processes running inside a container, and allows users to add or drop capabilities as needed [1][2]. References:
- 1: Capabilities | Docker Documentation
- 2: Linux Capabilities: Making Them Work in Containers
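A hedged sketch of restricting capabilities for a Docker container; the nginx image and the chosen capability are illustrative:
# Drop every capability and add back only what the workload needs.
docker run --rm --cap-drop ALL --cap-add NET_BIND_SERVICE nginx
# Inspect the capability sets the kernel reports for a process.
grep Cap /proc/self/status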
The command virsh vol-list vms returns the following error:
error: failed to get pool 'vms'
error: Storage pool not found: no storage pool with matching name 'vms'
Given that the directory /vms exists, which of the following commands resolves this issue?
Options:
dd if=/dev/zero of=/vms bs=1 count=0 flags=name:vms
libvirt-poolctl new --name=/vms --type=dir --path=/vms
qemu-img pool vms:/vms
virsh pool-create-as vms dir --target /vms
touch /vms/.libvirtpool
Answer:
D
Explanation:
The command virsh pool-create-as vms dir --target /vms creates and starts a transient storage pool named vms of type dir with the target directory /vms [1][2]. This command resolves the issue of the storage pool not found error, as it makes the existing directory /vms visible to libvirt as a storage pool. The other commands are invalid because:
- dd if=/dev/zero of=/vms bs=1 count=0 flags=name:vms is not a valid command syntax. The dd command does not take a flags argument, and the output file /vms should be a regular file, not a directory [3].
- libvirt-poolctl new --name=/vms --type=dir --path=/vms is not a valid command name. There is no such command as libvirt-poolctl in the libvirt package [4].
- qemu-img pool vms:/vms is not a valid command syntax. The qemu-img command does not have a pool subcommand, and the vms:/vms argument is not a valid image specification [5].
- touch /vms/.libvirtpool is not a valid command to create a storage pool. The touch command only creates an empty file, and the .libvirtpool file is not recognized by libvirt as a storage pool configuration file [6].
References:
- 1: virsh - difference between pool-define-as and pool-create-as - Stack Overflow
- 2: dd(1) - Linux manual page - man7.org
- 3: 12.3.3. Creating a Directory-based Storage Pool with virsh - Red Hat Customer Portal
- 4: libvirt - Linux Man Pages (3)
- 5: qemu-img(1) - Linux manual page - man7.org
- 6: touch(1) - Linux manual page - man7.org
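After creating the pool, the command from the question succeeds; a short sketch:
# Create and start a transient directory-backed storage pool named vms.
virsh pool-create-as vms dir --target /vms
# Verify the pool exists and list its volumes.
virsh pool-list --all
virsh vol-list vms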
Which file in a cgroup directory contains the list of processes belonging to this cgroup?
Options:
pids
members
procs
casks
subjects
Answer:
C
Explanation:
The file procs in a cgroup directory contains the list of processes belonging to this cgroup. Each line in the file shows the PID of a process that is a member of the cgroup. A process can be moved to a cgroup by writing its PID into the cgroup’s procs file. For example, to move the process with PID 24982 to the cgroup cg1, the following command can be used: echo 24982 > /sys/fs/cgroup/cg1/procs [1]. The file procs is different from the file tasks, which lists the threads belonging to the cgroup. The file procs can be used to move all threads in a thread group at once, while the file tasks can be used to move individual threads [2]. References:
- Creating and organizing cgroups · cgroup2 - GitHub Pages
- Control Groups — The Linux Kernel documentation
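A minimal sketch on the unified cgroup v2 hierarchy, where the file is exposed under the name cgroup.procs; the cgroup name cg1 is hypothetical and root privileges are assumed:
# Create a cgroup and move the current shell into it.
mkdir /sys/fs/cgroup/cg1
echo $$ > /sys/fs/cgroup/cg1/cgroup.procs
# List the PIDs that belong to this cgroup.
cat /sys/fs/cgroup/cg1/cgroup.procs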
Which of the following mechanisms are used by LXC and Docker to create containers? (Choose three.)
Options:
Linux Capabilities
Kernel Namespaces
Control Groups
POSIX ACLs
File System Permissions
Answer:
A, B, C
Explanation:
LXC and Docker are both container technologies that use Linux kernel features to create isolated environments for running applications. The main mechanisms that they use are:
- Linux Capabilities: These are a set of privileges that can be assigned to processes to limit their access to certain system resources or operations. For example, a process with the CAP_NET_ADMIN capability can perform network administration tasks, such as creating or deleting network interfaces. Linux capabilities allow containers to run with reduced privileges, enhancing their security and isolation.
- Kernel Namespaces: These are a way of creating separate views of the system resources for different processes. For example, a process in a mount namespace can have a different file system layout than the host or other namespaces. Kernel namespaces allow containers to have their own network interfaces, process IDs, user IDs, and other resources, without interfering with the host or other containers.
- Control Groups: These are a way of grouping processes and applying resource limits and accounting to them. For example, a control group can limit the amount of CPU, memory, disk I/O, or network bandwidth that a process or a group of processes can use. Control groups allow containers to have a fair share of the system resources and prevent them from exhausting the host resources.
POSIX ACLs and file system permissions are not mechanisms used by LXC and Docker to create containers. They are methods of controlling the access to files and directories on a file system, which can be applied to any process, not just containers.
References:
- LXC vs Docker: Which Container Platform Is Right for You?
- LXC vs Docker: Why Docker is Better in 2023 | UpGuard
- What is the Difference Between LXC, LXD and Docker Containers
- lxc - Which container implementation docker is using - Unix & Linux Stack Exchange
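These mechanisms can be observed directly on any Linux host; a short sketch:
# Namespaces: start a shell inside new mount and PID namespaces.
unshare --mount --pid --fork /bin/bash
# Control groups: show which cgroup the current shell belongs to.
cat /proc/self/cgroup
# Capabilities: show the capability bounding set of the current process.
grep CapBnd /proc/self/status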
Which of the following statements are true regarding VirtualBox?
Options:
It is a hypervisor designed as a special kernel that is booted before the first regular operating system starts.
It only supports Linux as a guest operating system and cannot run Windows inside a virtual machine.
It requires dedicated shared storage, as it cannot store virtual machine disk images locally on block devices of the virtualization host.
It provides both a graphical user interface and command line tools to administer virtual machines.
It is available for Linux only and requires the source code of the currently running Linux kernel to be available.
Answer:
D
Explanation:
VirtualBox is a hosted hypervisor, which means it runs as an application on top of an existing operating system, not as a special kernel that is booted before the first regular operating system starts [1]. VirtualBox supports a large number of guest operating systems, including Windows, Linux, Solaris, OS/2, and OpenBSD [1]. VirtualBox does not require dedicated shared storage, as it can store virtual machine disk images locally on block devices of the virtualization host, or on network shares, or on iSCSI targets [1]. VirtualBox provides both a graphical user interface (GUI) and command line tools (VBoxManage) to administer virtual machines [1]. VirtualBox is available for Windows, Linux, macOS, and Solaris hosts [1], and does not require the source code of the currently running Linux kernel to be available. References:
- Oracle VM VirtualBox: Features Overview
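A sketch of the command line side; the VM name debian-test is hypothetical:
# List the registered virtual machines.
VBoxManage list vms
# Start a VM without opening the GUI.
VBoxManage startvm "debian-test" --type headless
# Show the virtual hardware assigned to it.
VBoxManage showvminfo "debian-test"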
After setting up a data container using the following command:
docker create -v /data --name datastore debian /bin/true
how is an additional new container started which shares the /data volume with the datastore container?
Options:
docker run --share-with datastore --name service debian bash
docker run -v datastore:/data --name service debian bash
docker run --volumes-from datastore --name service debian bash
docker run -v /data --name service debian bash
docker run --volume-backend datastore -v /data --name service debian bash
Answer:
C
Explanation:
The correct way to start a new container that shares the /data volume with the datastore container is to use the --volumes-from flag. This flag mounts all the defined volumes from the referenced containers. In this case, the datastore container has a volume named /data, which is mounted in the service container at the same path. The other options are incorrect because they either use invalid flags, such as --share-with or --volume-backend, or they create new volumes instead of sharing the existing one, such as -v datastore:/data or -v /data. References:
- Docker Docs - Volumes
- Stack Overflow - How to map volume paths using Docker’s --volumes-from?
- Docker Docs - docker run
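The full sequence, shown as a sketch:
# Create the data container that defines the /data volume.
docker create -v /data --name datastore debian /bin/true
# Start a new container that mounts every volume defined by datastore.
docker run -it --volumes-from datastore --name service debian bash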
Which of the following statements are true about sparse images in the context of virtual machine storage? (Choose two.)
Options:
Sparse images are automatically shrunk when files within the image are deleted.
Sparse images may consume an amount of space different from their nominal size.
Sparse images can only be used in conjunction with paravirtualization.
Sparse images allocate backend storage at the first usage of a block.
Sparse images are automatically resized when their maximum capacity is about to be exceeded.
Answer:
B, D
Explanation:
Sparse images are a type of virtual disk images that grow in size as data is written to them, but do not shrink when data is deleted from them. Sparse images may consume an amount of space different from their nominal size, which is the maximum size that the image can grow to. For example, a sparse image with a nominal size of 100 GB may only take up 20 GB of physical storage if only 20 GB of data is written to it. Sparse images allocate backend storage at the first usage of a block, which means that the physical storage is only used when the virtual machine actually writes data to a block. This can save storage space and improve performance, as the image does not need to be pre-allocated or zeroed out.
Sparse images are not automatically shrunk when files within the image are deleted, because the virtual machine does not inform the host system about the freed blocks. To reclaim the unused space, a special tool such as virt-sparsify [1] or qemu-img [2] must be used to compact the image. Sparse images can be used with both full virtualization and paravirtualization, as the type of virtualization does not affect the format of the disk image. Sparse images are not automatically resized when their maximum capacity is about to be exceeded, because this would require changing the partition table and the filesystem of the image, which is not a trivial task. To resize a sparse image, a tool such as virt-resize [3] or qemu-img [2] must be used to increase the nominal size and the filesystem size of the image. References:
- 1: virt-sparsify
- 2: qemu-img
- 3: virt-resize
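A sketch illustrating the difference between nominal and consumed size; the file name is arbitrary:
# Create a sparse raw image with a nominal size of 100 GB.
qemu-img create -f raw disk.img 100G
# Nominal size versus blocks actually allocated on the backend storage.
ls -lh disk.img      # shows the nominal 100G
du -h disk.img       # shows only the space allocated so far
qemu-img info disk.img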
What is the default provider of Vagrant?
Options:
lxc
hyperv
virtualbox
vmware_workstation
docker
Answer:
C
Explanation:
Vagrant is a tool that allows users to create and configure lightweight, reproducible, and portable development environments. Vagrant supports multiple providers, which are the backends that Vagrant uses to create and manage the virtual machines. VirtualBox is the default provider for Vagrant. VirtualBox is still the most accessible platform to use Vagrant: it is free, cross-platform, and has been supported by Vagrant for years. As the default provider, it provides the lowest friction for new users to get started with Vagrant. However, users can also use other providers, such as VMware, Hyper-V, Docker, or LXC, depending on their preferences and needs. To use another provider, users must install it as a Vagrant plugin and specify it when running Vagrant commands. Users can also change the default provider by setting the VAGRANT_DEFAULT_PROVIDER environment variable. References:
- 1: Default Provider - Providers | Vagrant | HashiCorp Developer
- 2: Providers | Vagrant | HashiCorp Developer
- 3: How To Set Default Vagrant Provider to Virtualbox
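A sketch of selecting a provider; the libvirt provider is only an example of an alternative that would have to be installed separately:
# Uses VirtualBox when no provider is specified.
vagrant up
# Select another installed provider explicitly for this run ...
vagrant up --provider=libvirt
# ... or change the default for all future runs.
export VAGRANT_DEFAULT_PROVIDER=libvirt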
Which of the following kinds of data can cloud-init process directly from user-data? (Choose three.)
Options:
Shell scripts to execute
Lists of URLs to import
ISO images to boot from
cloud-config declarations in YAML
Base64-encoded binary files to execute
Answer:
A, B, D
Explanation:
Cloud-init is a tool that allows users to customize the configuration and behavior of cloud instances during the boot process. Cloud-init can process different kinds of data that are passed to the instance via user-data, which is a mechanism provided by various cloud providers to inject data into the instance. Among the kinds of data that cloud-init can process directly from user-data are:
- Shell scripts to execute: Cloud-init can execute user-data that is formatted as a shell script, starting with the #!/bin/sh or #!/bin/bash shebang. The script can contain any commands that are valid in the shell environment of the instance. The script is executed as the root user during the boot process [1][2].
- Lists of URLs to import: Cloud-init can import user-data that is formatted as a list of URLs, separated by newlines. The URLs can point to any valid data source that cloud-init supports, such as shell scripts, cloud-config files, or include files. The URLs are fetched and processed by cloud-init in the order they appear in the list [1][3].
- cloud-config declarations in YAML: Cloud-init can process user-data that is formatted as a cloud-config file, which is a YAML document that contains declarations for various cloud-init modules. The cloud-config file can specify various aspects of the instance configuration, such as hostname, users, packages, commands, services, and more. The cloud-config file must start with the #cloud-config header [1][4].
The other kinds of data listed in the question are not directly processed by cloud-init from user-data. They are either not supported, not recommended, or require additional steps to be processed. These kinds of data are:
- ISO images to boot from: Cloud-init does not support booting from ISO images that are passed as user-data. ISO images are typically used to install an operating system on a physical or virtual machine, not to customize an existing cloud instance. To boot from an ISO image, the user would need to attach it as a secondary disk to the instance and configure the boot order accordingly [5].
- Base64-encoded binary files to execute: Cloud-init does not recommend passing binary files as user-data, as they may not be compatible with the instance’s architecture or operating system. Base64-encoding does not change this fact, as it only converts the binary data into ASCII characters. To execute a binary file, the user would need to decode it and make it executable on the instance [6].
References:
- User-Data Formats — cloud-init 22.1 documentation
- User-Data Scripts
- Include File
- Cloud Config
- How to Boot From ISO Image File Directly in Windows
- How to run a binary file as a command in the terminal?
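A minimal cloud-config user-data document of the kind cloud-init consumes directly (a sketch; hostname, package and command are illustrative):
# Note the mandatory #cloud-config header on the first line.
cat > user-data <<'EOF'
#cloud-config
hostname: demo-instance
packages:
  - nginx
runcmd:
  - systemctl enable --now nginx
EOF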
Which of the following statements are true regarding resource management for full virtualization? (Choose two.)
Options:
The hypervisor may provide fine-grained limits to internal elements of the guest operating system such as the number of processes.
The hypervisor provides each virtual machine with hardware of a defined capacity that limits the resources of the virtual machine.
Full virtualization cannot pose any limits to virtual machines and always assigns the host system's resources in a first-come-first-serve manner.
All processes created within the virtual machines are transparently and equally scheduled in the host system for CPU and I/O usage.
It is up to the virtual machine to use its assigned hardware resources and create, for example, an arbitrary amount of network sockets.
Answer:
B, E
Explanation:
Resource management for full virtualization is the process of allocating and controlling the physical resources of the host system to the virtual machines running on it. The hypervisor is the software layer that performs this task, by providing each virtual machine with a virtual hardware of a defined capacity that limits the resources of the virtual machine. For example, the hypervisor can specify how many virtual CPUs, how much memory, and how much disk space each virtual machine can use. The hypervisor can also enforce resource isolation and prioritization among the virtual machines, to ensure that they do not interfere with each other or consume more resources than they are allowed to.
The hypervisor cannot provide fine-grained limits to internal elements of the guest operating system, such as the number of processes, because the hypervisor does not have access to the internal state of the guest operating system. The guest operating system is responsible for managing its own resources within the virtual hardware provided by the hypervisor. For example, the guest operating system can create an arbitrary amount of network sockets, as long as it does not exceed the network bandwidth allocated by the hypervisor.
Full virtualization can pose limits to virtual machines, and does not always assign the host system’s resources in a first-come-first-serve manner. The hypervisor can use various resource management techniques, such as reservation, limit, share, weight, and quota, to allocate and control the resources of the virtual machines. The hypervisor can also use resource scheduling algorithms, such as round-robin, fair-share, or priority-based, to distribute the resources among the virtual machines according to their needs and preferences.
All processes created within the virtual machines are not transparently and equally scheduled in the host system for CPU and I/O usage. The hypervisor can use different scheduling policies, such as proportional-share, co-scheduling, or gang scheduling, to schedule the virtual CPUs of the virtual machines on the physical CPUs of the host system. The hypervisor can also use different I/O scheduling algorithms, such as deadline, anticipatory, or completely fair queuing, to schedule the I/O requests of the virtual machines on the physical I/O devices of the host system. The hypervisor can also use different resource accounting and monitoring mechanisms, such as cgroups, perf, or sar, to measure and report the resource consumption and performance of the virtual machines. References:
- Oracle VM VirtualBox: Features Overview
- Resource Management as an Enabling Technology for Virtualization - Oracle
- Introduction to virtualization and resource management in IaaS | Cloud Native Computing Foundation
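With libvirt-based full virtualization, the defined hardware capacity of a guest can be inspected and adjusted from the host; a sketch assuming a domain named vm1 (hypothetical):
# Show the virtual hardware currently assigned to the guest.
virsh dominfo vm1
# Adjust the persistent memory ceiling and virtual CPU count.
virsh setmaxmem vm1 4096M --config
virsh setvcpus vm1 2 --config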