What is the difference between crash consistent and application consistent cloning methods?
Application-consistent clones allow the pending I/O operations to finish before committing them to the database.
Both methods are the same; the only difference is speed of cloning.
Crash consistent method should be used only for the virtual machines with Fault Tolerance enabled.
Application consistent method is available only for Linux operating systems.
The difference between crash consistent and application consistent cloning methods is that the application consistent method ensures that the cloned virtual machine is in a consistent state with respect to the application and database, while the crash consistent method does not guarantee that. The application consistent method uses the VMware snapshot or the Microsoft Volume Shadow Copy Service (VSS) to quiesce the application and flush the pending I/O operations to the disk before creating the clone, which prevents data corruption or loss. The crash consistent method does not quiesce the application or flush the pending I/O operations, and creates the clone as if the virtual machine had crashed, which may result in data inconsistency or recovery issues. The application consistent method is recommended for virtual machines that run database applications, such as SQL Server or Oracle, while the crash consistent method is suitable for virtual machines that run non-database applications, such as web servers or file servers. The application consistent method is not available only for Linux operating systems (option D), as it also supports Windows Server operating systems that have VMware Tools or VSS installed. The crash consistent method should not be used only for virtual machines with Fault Tolerance enabled (option C), as it can be used for any virtual machine that does not require application consistency. Both methods are not the same, and the speed of cloning is not the only difference (option B), as they have different implications for the data integrity and availability of the cloned virtual machine.
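The quiescing difference can be sketched in a few lines of self-contained Python (purely illustrative, not SimpliVity code): a buffered write stands in for pending application I/O, and flushing it before reading the file stands in for quiescing the application before the clone is taken.

```python
import os
import tempfile

def snapshot(path):
    # A "snapshot" here is simply whatever has actually reached the file.
    with open(path, "rb") as fh:
        return fh.read()

tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.close()

f = open(tmp.name, "wb", buffering=4096)
f.write(b"committed transaction\n")    # pending I/O: still in the userspace buffer

crash_consistent = snapshot(tmp.name)  # no quiescing: the pending write is missed

f.flush()                              # "quiesce": flush pending I/O to the disk
os.fsync(f.fileno())
app_consistent = snapshot(tmp.name)    # taken after the flush

f.close()
os.unlink(tmp.name)

print(crash_consistent)   # b''
print(app_consistent)     # b'committed transaction\n'
```

The crash-consistent read sees an empty file because the write never left the buffer, which is exactly the class of inconsistency the application-consistent method avoids.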
In an HPE SimpliVity 10-node cluster, a simultaneous increase of I/O requests across several VMs caused a surge in I/O traffic in a single node. Which HPE SimpliVity feature can evenly distribute the I/Os across the cluster?
Resource Balancer
Management Virtual Appliance
Arbiter
Intelligent Workload Optimizer
The Intelligent Workload Optimizer is a feature of HPE SimpliVity that automatically balances the virtual machine workloads across the nodes in a cluster, based on CPU, memory, and storage utilization. This feature helps to improve the performance and efficiency of the cluster and to avoid hotspots and bottlenecks. The Intelligent Workload Optimizer can be configured to run periodically or on demand, and it can also take into account the affinity and anti-affinity rules for the virtual machines. The Resource Balancer (option A) is a feature of HPE SimpliVity that automatically balances the storage capacity across the nodes in a cluster, based on the free space and deduplication ratio. This feature helps to optimize the storage utilization and availability of the cluster and to avoid capacity issues and data loss. The Resource Balancer does not balance the virtual machine workloads or the I/O traffic. The Management Virtual Appliance (option B) is a component of HPE SimpliVity that provides the management interface and functionality for the cluster, such as backup, restore, clone, move, and federation operations. The Management Virtual Appliance does not balance the virtual machine workloads or the I/O traffic. The Arbiter (option C) is a component of HPE SimpliVity that provides the quorum service for the cluster, ensuring data consistency and availability in the event of a node or site failure. The Arbiter does not balance the virtual machine workloads or the I/O traffic.
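The balancing idea can be illustrated with a toy greedy placement in Python (an illustration only; this is not the actual Intelligent Workload Optimizer algorithm, which also weighs memory, storage, and affinity rules):

```python
def rebalance(vm_loads, node_count):
    """Greedy placement: assign the heaviest VMs first, each to the least-loaded node."""
    nodes = [0.0] * node_count
    placement = {}
    for vm, load in sorted(vm_loads.items(), key=lambda kv: -kv[1]):
        target = min(range(node_count), key=lambda i: nodes[i])
        nodes[target] += load
        placement[vm] = target
    return placement, nodes

# Hypothetical per-VM I/O loads (arbitrary units)
vms = {"vm1": 30, "vm2": 30, "vm3": 20, "vm4": 10, "vm5": 10}
placement, loads = rebalance(vms, 2)
print(loads)  # [50.0, 50.0] — the surge is spread evenly instead of hitting one node
```

With all five workloads on one node, that node would carry 100 units; the greedy spread leaves each node at 50.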
A customer plans to mix nodes with different drive configuration in the same cluster. What should you explain to this customer?
Mixing nodes with different drive configuration requires an additional license.
Mixing different drive configurations within a cluster results in unbalanced I/O performance.
Mixing nodes with different drive configuration is available only for AMD-based nodes.
Mixing All-flash and hybrid nodes in the same cluster is supported.
According to the HPE SimpliVity documents and learning resources, mixing nodes with different drive configurations within a cluster is not recommended, as it can result in unbalanced I/O performance and capacity utilization. This is because the HPE SimpliVity nodes use a distributed file system that replicates data across all nodes in the cluster, and the data efficiency and backup features depend on the consistent performance of the underlying storage devices. Therefore, it is best to use nodes with the same drive configuration and capacity within a cluster, and to avoid mixing All-flash and hybrid nodes, or nodes with different drive types, sizes, or speeds. The other options are incorrect because they are either false or irrelevant. Mixing nodes with different drive configurations does not require an additional license, nor is it available only for AMD-based nodes. Mixing All-flash and hybrid nodes in the same cluster is not supported, as it can cause performance and capacity issues. References: Using HPE SimpliVity Official Certification Study Guide, page 42; HPE SimpliVity networking explained; HPE SimpliVity Releases
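A toy model in Python (an illustrative assumption, not HPE sizing math) shows why a slower node can gate a cluster whose data is spread evenly across all members: if each node serves an equal share of the I/O, the aggregate rate is limited by the slowest node.

```python
def balanced_throughput(node_iops):
    # Toy model: with data replicated and served evenly across all nodes,
    # the cluster's aggregate rate is gated by its slowest member.
    return len(node_iops) * min(node_iops)

all_flash = [100_000] * 4                        # four identical all-flash nodes
mixed = [100_000, 100_000, 30_000, 30_000]       # two slower hybrid nodes mixed in

print(balanced_throughput(all_flash))  # 400000
print(balanced_throughput(mixed))      # 120000
```

Under this (simplified) assumption, adding two slower nodes does not just fail to help: it drags the whole cluster below what the two fast nodes could deliver alone.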
A user deleted a file from Linux based virtual machine.
Which HPE SimpliVity data protection feature can be leveraged to recover the file?
Restoring a virtual machine from the backup
Moving a virtual machine from DR site to the primary site
Reverting to the SimpliVity snapshot
Using file level restore option
According to the HPE SimpliVity Administration Guide, the file level restore option allows you to recover individual files or folders from a backup of a virtual machine. This option is useful when you need to restore a specific file that was accidentally deleted or corrupted, without affecting the rest of the virtual machine. The file level restore option supports both Windows and Linux based virtual machines, and can be performed using the HPE SimpliVity plugin in the vSphere Web Client. The file level restore option creates a temporary virtual machine from the backup, mounts the virtual disks, and copies the selected files or folders to a destination of your choice. The temporary virtual machine is then deleted automatically.
The other options are not suitable for recovering a single file from a Linux based virtual machine. Restoring a virtual machine from the backup would overwrite the entire virtual machine with the backup data, which may not be desirable or necessary. Moving a virtual machine from the DR site to the primary site would not help if the file was deleted from both sites. Reverting to the SimpliVity snapshot would also overwrite the entire virtual machine with the snapshot data, which may not be the latest or the most relevant version.
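The file level restore workflow described above can be sketched as a self-contained Python simulation (the backup image and paths are hypothetical stand-ins, not the HPE SimpliVity API): stage the backup in a temporary location, copy only the requested file out, then tear the staging area down.

```python
import os
import shutil
import tempfile

# Hypothetical backup image: a dict standing in for the backup's virtual disks.
backup = {
    "etc/config.yaml": b"key: value\n",
    "home/user/report.txt": b"quarterly numbers\n",
}

def file_level_restore(backup_image, wanted, dest_dir):
    """Restore a single file from a backup image without touching the live VM."""
    # 1. "Mount" the backup by materialising it in a temporary staging area
    staging = tempfile.mkdtemp()
    try:
        for rel, data in backup_image.items():
            path = os.path.join(staging, rel)
            os.makedirs(os.path.dirname(path), exist_ok=True)
            with open(path, "wb") as fh:
                fh.write(data)
        # 2. Copy only the requested file to the destination
        dst = os.path.join(dest_dir, os.path.basename(wanted))
        shutil.copy2(os.path.join(staging, wanted), dst)
        return dst
    finally:
        # 3. Tear down the staging area (the "temporary VM" is deleted)
        shutil.rmtree(staging)

dest = tempfile.mkdtemp()
restored = file_level_restore(backup, "home/user/report.txt", dest)
with open(restored, "rb") as fh:
    print(fh.read())  # b'quarterly numbers\n'
```

The key property the simulation mirrors is that the running virtual machine is never modified: only the staging copy is touched, and it is removed automatically.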
A customer plans to use HPE SimpliVity RapidDR, but is concerned about vCenter Server license cost. In which deployment scenario is only a single vCenter Server required?
when fewer than 600 VMs are covered with a single recovery plan
when Enhanced Linked Mode is enabled for vCenter
when Intelligent Workload Optimizer is disabled
when federation is managed centrally
A customer wants to migrate virtual machines from the existing infrastructure to HPE SimpliVity. How can you complete this task?
Connect VMFS datastores holding the virtual machine files to HPE SimpliVity nodes, and migrate the virtual machines.
Share HPE SimpliVity datastores with standard ESXi hosts, and migrate the virtual machines.
Map VMFS datastores holding the virtual machine files as an RDM to HPE SimpliVity nodes, and migrate the virtual machines.
Add ESXi system to the HPE SimpliVity cluster, and migrate the virtual machines to HPE SimpliVity datastores.
The best way to migrate virtual machines from the existing infrastructure to HPE SimpliVity is to share HPE SimpliVity datastores with standard ESXi hosts, and migrate the virtual machines using VMware vSphere® Storage vMotion®. This method allows the virtual machines to be moved to the HPE SimpliVity datastores without any downtime or impact on performance. The HPE SimpliVity datastores are automatically created and managed by the HPE SimpliVity Virtual Controller, and they can be shared with any ESXi host that is in the same vCenter Server® as the HPE SimpliVity cluster. Connecting VMFS datastores holding the virtual machine files to HPE SimpliVity nodes, and migrating the virtual machines (option A), is not a supported method, as the HPE SimpliVity nodes do not have direct access to the VMFS datastores and can only use the HPE SimpliVity datastores. Mapping VMFS datastores holding the virtual machine files as an RDM to HPE SimpliVity nodes, and migrating the virtual machines (option C), is not a supported method, as the HPE SimpliVity nodes do not support RDM devices and can only use the HPE SimpliVity datastores. Adding an ESXi system to the HPE SimpliVity cluster, and migrating the virtual machines to HPE SimpliVity datastores (option D), is not a recommended method, as it requires the ESXi system to be converted to an HPE SimpliVity node, which involves installing the HPE OmniStack software and the HPE SimpliVity Virtual Controller and reconfiguring the network settings. This method may cause downtime and data loss, and it is only advised for specific scenarios, such as replacing a failed node or expanding the cluster.
What is the difference between crash consistent and application consistent cloning methods?
Application-consistent clones allow the pending I/O operations to finish before committing them to the database.
Both methods are the same; the only difference is speed of cloning.
Crash consistent method should be used only for the virtual machines with Fault Tolerance enabled.
Application consistent method is available only for Linux operating systems.
According to the HPE SimpliVity documents and learning resources, the difference between crash consistent and application consistent cloning methods is that the application consistent method ensures that the cloned virtual machine is in a consistent state with respect to the application data and transactions. This means that the application consistent method allows the pending I/O operations to finish before committing them to the database, and flushes the memory buffers and caches to the disk. This ensures that the cloned virtual machine can resume the application without any data loss or corruption. The crash consistent method, on the other hand, does not guarantee that the cloned virtual machine is in a consistent state with respect to the application data and transactions. This means that the crash consistent method does not wait for the pending I/O operations to finish or flush the memory buffers and caches to the disk. This may result in some data loss or corruption if the cloned virtual machine resumes the application. The crash consistent method is faster than the application consistent method, but less reliable. The other options are incorrect because they are either false or irrelevant. Both methods are not the same, and the difference is not only the speed of cloning. The crash consistent method can be used for any virtual machine, not only for those with Fault Tolerance enabled. The application consistent method is available for both Windows and Linux operating systems, not only for Linux. References: Clone a virtual machine; HPE SimpliVity frequently asked questions, page 17; Using HPE SimpliVity Official Certification Study Guide, page 65
A customer plans to deploy HPE SimpliVity 380 Gen10 LFF H and HPE SimpliVity 380 Gen10 SFF H nodes. What should you recommend for this setup?
Put each type of the nodes in a different federation.
Replace the LFF nodes with all-flash nodes.
Put all of the nodes in the same cluster.
Put SFF and LFF nodes in separate clusters.
A customer plans to add HPE SimpliVity 380 Gen10 H SFF nodes to their remote site. The primary site is running HPE SimpliVity 380 Gen10 hardware accelerated nodes. Which recommendation should be applied for this design?
Create a new cluster and split both types of nodes equally between both clusters.
Add new nodes to the existing cluster
Configure the HPE SimpliVity 380 Gen10 H SFF with a hardware accelerator card.
Create a new cluster with SimpliVity 380 Gen10 H SFF nodes.
According to the HPE SimpliVity documents and learning resources, the best practice for designing an HPE SimpliVity solution is to use nodes with the same drive configuration and capacity within a cluster, and to avoid mixing All-flash and hybrid nodes, or nodes with different drive types, sizes, or speeds. This is because mixing nodes with different drive configurations can result in unbalanced I/O performance and capacity utilization, as well as compatibility issues. Therefore, the recommended option for this design is to create a new cluster with HPE SimpliVity 380 Gen10 H SFF nodes at the remote site, and keep the existing cluster with HPE SimpliVity 380 Gen10 hardware accelerated nodes at the primary site. The other options are incorrect because they do not follow the best practice and can cause performance and capacity problems. References: Using HPE SimpliVity Official Certification Study Guide, page 42; HPE SimpliVity networking explained; HPE SimpliVity Releases
How is each datastore presented to HPE SimpliVity nodes?
as a part of vSAN
as a VMFS datastore
as an NFS datastore
as a vVol
According to the HPE SimpliVity documents and learning resources, each datastore in an HPE SimpliVity cluster is presented to the HPE SimpliVity nodes as an NFS datastore. The HPE SimpliVity (OmniStack) software exports each datastore over NFS from the Virtual Controller, which allows multiple ESXi hosts to access the same datastore concurrently. HPE SimpliVity uses these NFS datastores to store virtual machine files and provide shared storage to all virtual machines on the hosts. HPE SimpliVity does not present its datastores as vSAN, VMFS, or vVol. References: Using HPE SimpliVity Official Certification Study Guide, page 41; SimpliVity Video - How to Create a HPE SimpliVity Datastore
A customer wants to provide access to the HPE SimpliVity datastores for compute nodes running CPU-intensive virtual machines. What should you tell the customer?
It is supported to connect up to 5 compute nodes per SimpliVity node.
Additional license is required to connect ESXi nodes to SimpliVity datastores.
Connecting ESXi compute nodes is possible only when VMFS datastores are configured at SimpliVity Federation level
Compute nodes must reside in the same cluster as HPE SimpliVity nodes.
Compute nodes are x86 servers that run ESXi and consume the HPE SimpliVity datastores for storage. They provide additional CPU and memory resources to the environment without requiring additional SimpliVity licenses. However, they must reside in the same cluster as HPE SimpliVity nodes, as this is a requirement for the SimpliVity Data Virtualization Platform (DVP) to function properly. The number of compute nodes that can be connected per SimpliVity node is not limited to 5, but depends on the available network bandwidth and storage capacity. An additional license is not required to connect ESXi nodes to SimpliVity datastores, as this is a native capability of the SimpliVity DVP. Connecting ESXi compute nodes does not require VMFS datastores to be configured at the SimpliVity Federation level, as this is not a prerequisite for the SimpliVity DVP. References: Using HPE SimpliVity Official Certification Study Guide, page 29, section 2.2, “Preparing Compute Nodes to Use SimpliVity DVP”; HPE SimpliVity 380 Gen10 Node, section “Features”.
A customer has development virtual machines that do not require storage HA. How can the customer save storage capacity within an HPE SimpliVity cluster?
By disabling HA cluster functionality for HPE SimpliVity Federation
By placing them on HPE StoreOnce instead of HPE SimpliVity
By disabling the HA feature for only these virtual machines at vCenter Server
By creating a single-replica datastore for these virtual machines
HPE SimpliVity provides storage HA by creating two copies of each virtual machine data across different nodes in a cluster. This ensures that the virtual machines can continue to run even if one node fails. However, this also consumes twice the storage capacity for each virtual machine. For development virtual machines that do not require storage HA, the customer can save storage capacity by creating a single-replica datastore for these virtual machines. A single-replica datastore is a datastore that has only one copy of the virtual machine data on one node. This reduces the storage consumption by 50%, but also increases the risk of data loss and unavailability in case of node failure. Therefore, the customer should carefully weigh the trade-offs between storage efficiency and data protection when using a single-replica datastore. References: HPE SimpliVity Data Virtualization Platform; HPE SimpliVity User Guide
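The capacity trade-off is simple arithmetic (illustrative numbers; deduplication and compression are ignored for clarity):

```python
def usable_capacity(raw_gb, replicas):
    """Effective VM capacity when each write is kept `replicas` times in the cluster."""
    return raw_gb / replicas

cluster_raw = 10_000  # GB of raw cluster capacity (illustrative figure)

print(usable_capacity(cluster_raw, 2))  # 5000.0  — standard HA datastore (two copies)
print(usable_capacity(cluster_raw, 1))  # 10000.0 — single-replica datastore
```

Halving the replica count doubles the effective capacity for those development VMs, at the cost of losing their data if the node holding the single copy fails.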