QEMU full virtualization – CPU emulations (enable/disable CPU flags/instruction sets) of QEMU 8.0

Yet another update to this QEMU series, after the articles for version 2.0.0 (QEMU full virtualization – CPU emulations (enable/disable CPU flags/instruction sets) of QEMU 2.0.0) and version 6.2 (QEMU full virtualization – CPU emulations (enable/disable CPU flags/instruction sets) of QEMU 6.2.0).


The latest version of QEMU is 8.0.4 and it offers way more CPU flags and features! You can use QEMU with nearly native full virtualization. Here are some important tips for the guest CPU to consider when using QEMU directly (without any virtualization manager like virt-manager, libvirt and so on).

TIP 1) CPU emulation of x86

You can see what options are available for host emulation with:

root@srv ~ # qemu-system-x86_64 -cpu help
Available CPUs:
x86 486                   (alias configured by machine type)
x86 486-v1                
x86 Broadwell             (alias configured by machine type)
x86 Broadwell-IBRS        (alias of Broadwell-v3)
x86 Broadwell-noTSX       (alias of Broadwell-v2)
x86 Broadwell-noTSX-IBRS  (alias of Broadwell-v4)
x86 Broadwell-v1          Intel Core Processor (Broadwell)
x86 Broadwell-v2          Intel Core Processor (Broadwell, no TSX)
x86 Broadwell-v3          Intel Core Processor (Broadwell, IBRS)
x86 Broadwell-v4          Intel Core Processor (Broadwell, no TSX, IBRS)
x86 Cascadelake-Server    (alias configured by machine type)
x86 Cascadelake-Server-noTSX  (alias of Cascadelake-Server-v3)
x86 Cascadelake-Server-v1  Intel Xeon Processor (Cascadelake)
x86 Cascadelake-Server-v2  Intel Xeon Processor (Cascadelake) [ARCH_CAPABILITIES]
x86 Cascadelake-Server-v3  Intel Xeon Processor (Cascadelake) [ARCH_CAPABILITIES, no TSX]
x86 Cascadelake-Server-v4  Intel Xeon Processor (Cascadelake) [ARCH_CAPABILITIES, no TSX]
x86 Cascadelake-Server-v5  Intel Xeon Processor (Cascadelake) [ARCH_CAPABILITIES, EPT switching, XSAVES, no TSX]
x86 Conroe                (alias configured by machine type)
x86 Conroe-v1             Intel Celeron_4x0 (Conroe/Merom Class Core 2)
x86 Cooperlake            (alias configured by machine type)
x86 Cooperlake-v1         Intel Xeon Processor (Cooperlake)
x86 Cooperlake-v2         Intel Xeon Processor (Cooperlake) [XSAVES]
x86 Denverton             (alias configured by machine type)
x86 Denverton-v1          Intel Atom Processor (Denverton)
x86 Denverton-v2          Intel Atom Processor (Denverton) [no MPX, no MONITOR]
x86 Denverton-v3          Intel Atom Processor (Denverton) [XSAVES, no MPX, no MONITOR]
x86 Dhyana                (alias configured by machine type)
x86 Dhyana-v1             Hygon Dhyana Processor
x86 Dhyana-v2             Hygon Dhyana Processor [XSAVES]
x86 EPYC                  (alias configured by machine type)
x86 EPYC-IBPB             (alias of EPYC-v2)
x86 EPYC-Milan            (alias configured by machine type)
x86 EPYC-Milan-v1         AMD EPYC-Milan Processor
x86 EPYC-Rome             (alias configured by machine type)
x86 EPYC-Rome-v1          AMD EPYC-Rome Processor
x86 EPYC-Rome-v2          AMD EPYC-Rome Processor
x86 EPYC-v1               AMD EPYC Processor
x86 EPYC-v2               AMD EPYC Processor (with IBPB)
x86 EPYC-v3               AMD EPYC Processor
x86 Haswell               (alias configured by machine type)
x86 Haswell-IBRS          (alias of Haswell-v3)
x86 Haswell-noTSX         (alias of Haswell-v2)
x86 Haswell-noTSX-IBRS    (alias of Haswell-v4)
x86 Haswell-v1            Intel Core Processor (Haswell)
x86 Haswell-v2            Intel Core Processor (Haswell, no TSX)
x86 Haswell-v3            Intel Core Processor (Haswell, IBRS)
x86 Haswell-v4            Intel Core Processor (Haswell, no TSX, IBRS)
x86 Icelake-Server        (alias configured by machine type)
x86 Icelake-Server-noTSX  (alias of Icelake-Server-v2)
x86 Icelake-Server-v1     Intel Xeon Processor (Icelake)
x86 Icelake-Server-v2     Intel Xeon Processor (Icelake) [no TSX]
x86 Icelake-Server-v3     Intel Xeon Processor (Icelake)
x86 Icelake-Server-v4     Intel Xeon Processor (Icelake)
x86 Icelake-Server-v5     Intel Xeon Processor (Icelake) [XSAVES]
x86 Icelake-Server-v6     Intel Xeon Processor (Icelake) [5-level EPT]
x86 IvyBridge             (alias configured by machine type)
x86 IvyBridge-IBRS        (alias of IvyBridge-v2)
x86 IvyBridge-v1          Intel Xeon E3-12xx v2 (Ivy Bridge)
x86 IvyBridge-v2          Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS)
x86 KnightsMill           (alias configured by machine type)
x86 KnightsMill-v1        Intel Xeon Phi Processor (Knights Mill)
x86 Nehalem               (alias configured by machine type)
x86 Nehalem-IBRS          (alias of Nehalem-v2)
x86 Nehalem-v1            Intel Core i7 9xx (Nehalem Class Core i7)
x86 Nehalem-v2            Intel Core i7 9xx (Nehalem Core i7, IBRS update)
x86 Opteron_G1            (alias configured by machine type)
x86 Opteron_G1-v1         AMD Opteron 240 (Gen 1 Class Opteron)
x86 Opteron_G2            (alias configured by machine type)
x86 Opteron_G2-v1         AMD Opteron 22xx (Gen 2 Class Opteron)
x86 Opteron_G3            (alias configured by machine type)
x86 Opteron_G3-v1         AMD Opteron 23xx (Gen 3 Class Opteron)
x86 Opteron_G4            (alias configured by machine type)
x86 Opteron_G4-v1         AMD Opteron 62xx class CPU
x86 Opteron_G5            (alias configured by machine type)
x86 Opteron_G5-v1         AMD Opteron 63xx class CPU
x86 Penryn                (alias configured by machine type)
x86 Penryn-v1             Intel Core 2 Duo P9xxx (Penryn Class Core 2)
x86 SandyBridge           (alias configured by machine type)
x86 SandyBridge-IBRS      (alias of SandyBridge-v2)
x86 SandyBridge-v1        Intel Xeon E312xx (Sandy Bridge)
x86 SandyBridge-v2        Intel Xeon E312xx (Sandy Bridge, IBRS update)
x86 SapphireRapids        (alias configured by machine type)
x86 SapphireRapids-v1     Intel Xeon Processor (SapphireRapids)
x86 Skylake-Client        (alias configured by machine type)
x86 Skylake-Client-IBRS   (alias of Skylake-Client-v2)
x86 Skylake-Client-noTSX-IBRS  (alias of Skylake-Client-v3)
x86 Skylake-Client-v1     Intel Core Processor (Skylake)
x86 Skylake-Client-v2     Intel Core Processor (Skylake, IBRS)
x86 Skylake-Client-v3     Intel Core Processor (Skylake, IBRS, no TSX)
x86 Skylake-Client-v4     Intel Core Processor (Skylake, IBRS, no TSX) [IBRS, XSAVES, no TSX]
x86 Skylake-Server        (alias configured by machine type)
x86 Skylake-Server-IBRS   (alias of Skylake-Server-v2)
x86 Skylake-Server-noTSX-IBRS  (alias of Skylake-Server-v3)
x86 Skylake-Server-v1     Intel Xeon Processor (Skylake)
x86 Skylake-Server-v2     Intel Xeon Processor (Skylake, IBRS)
x86 Skylake-Server-v3     Intel Xeon Processor (Skylake, IBRS, no TSX)
x86 Skylake-Server-v4     Intel Xeon Processor (Skylake, IBRS, no TSX)
x86 Skylake-Server-v5     Intel Xeon Processor (Skylake, IBRS, no TSX) [IBRS, XSAVES, EPT switching, no TSX]
x86 Snowridge             (alias configured by machine type)
x86 Snowridge-v1          Intel Atom Processor (SnowRidge)
x86 Snowridge-v2          Intel Atom Processor (Snowridge, no MPX)
x86 Snowridge-v3          Intel Atom Processor (Snowridge, no MPX) [XSAVES, no MPX]
x86 Snowridge-v4          Intel Atom Processor (Snowridge, no MPX) [no split lock detect, no core-capability]
x86 Westmere              (alias configured by machine type)
x86 Westmere-IBRS         (alias of Westmere-v2)
x86 Westmere-v1           Westmere E56xx/L56xx/X56xx (Nehalem-C)
x86 Westmere-v2           Westmere E56xx/L56xx/X56xx (IBRS update)
x86 athlon                (alias configured by machine type)
x86 athlon-v1             QEMU Virtual CPU version 2.5+
x86 core2duo              (alias configured by machine type)
x86 core2duo-v1           Intel(R) Core(TM)2 Duo CPU     T7700  @ 2.40GHz
x86 coreduo               (alias configured by machine type)
x86 coreduo-v1            Genuine Intel(R) CPU           T2600  @ 2.16GHz
x86 kvm32                 (alias configured by machine type)
x86 kvm32-v1              Common 32-bit KVM processor
x86 kvm64                 (alias configured by machine type)
x86 kvm64-v1              Common KVM processor
x86 n270                  (alias configured by machine type)
x86 n270-v1               Intel(R) Atom(TM) CPU N270   @ 1.60GHz
x86 pentium               (alias configured by machine type)
x86 pentium-v1            
x86 pentium2              (alias configured by machine type)
x86 pentium2-v1           
x86 pentium3              (alias configured by machine type)
x86 pentium3-v1           
x86 phenom                (alias configured by machine type)
x86 phenom-v1             AMD Phenom(tm) 9550 Quad-Core Processor
x86 qemu32                (alias configured by machine type)
x86 qemu32-v1             QEMU Virtual CPU version 2.5+
x86 qemu64                (alias configured by machine type)
x86 qemu64-v1             QEMU Virtual CPU version 2.5+
x86 base                  base CPU model type with no features enabled
x86 host                  processor with all supported host features 
x86 max                   Enables all features supported by the accelerator in the current host

Recognized CPUID flags:
  3dnow 3dnowext 3dnowprefetch abm ace2 ace2-en acpi adx aes amd-no-ssb
  amd-ssbd amd-stibp amx-bf16 amx-int8 amx-tile apic arat arch-capabilities
  arch-lbr avic avx avx-vnni avx2 avx512-4fmaps avx512-4vnniw avx512-bf16
  avx512-fp16 avx512-vp2intersect avx512-vpopcntdq avx512bitalg avx512bw
  avx512cd avx512dq avx512er avx512f avx512ifma avx512pf avx512vbmi
  avx512vbmi2 avx512vl avx512vnni bmi1 bmi2 bus-lock-detect cid cldemote
  clflush clflushopt clwb clzero cmov cmp-legacy core-capability cr8legacy
  cx16 cx8 dca de decodeassists ds ds-cpl dtes64 erms est extapic f16c
  flushbyasid fma fma4 fpu fsgsbase fsrc fsrm fsrs full-width-write fxsr
  fxsr-opt fzrm gfni hle ht hypervisor ia64 ibpb ibrs ibrs-all ibs intel-pt
  intel-pt-lip invpcid invtsc kvm-asyncpf kvm-asyncpf-int
  kvm-hint-dedicated kvm-mmu kvm-msi-ext-dest-id kvm-nopiodelay
  kvm-poll-control kvm-pv-eoi kvm-pv-ipi kvm-pv-sched-yield
  kvm-pv-tlb-flush kvm-pv-unhalt kvm-steal-time kvmclock kvmclock
  kvmclock-stable-bit la57 lahf-lm lbrv lm lwp mca mce md-clear mds-no
  misalignsse mmx mmxext monitor movbe movdir64b movdiri mpx msr mtrr
  nodeid-msr npt nrip-save nx osvw pae pat pause-filter pbe pcid pclmulqdq
  pcommit pdcm pdpe1gb perfctr-core perfctr-nb pfthreshold pge phe phe-en
  pks pku pmm pmm-en pn pni popcnt pschange-mc-no pse pse36 rdctl-no rdpid
  rdrand rdseed rdtscp rsba rtm sep serialize sgx sgx-aex-notify sgx-debug
  sgx-edeccssa sgx-exinfo sgx-kss sgx-mode64 sgx-provisionkey sgx-tokenkey
  sgx1 sgx2 sgxlc sha-ni skinit skip-l1dfl-vmentry smap smep smx spec-ctrl
  split-lock-detect ss ssb-no ssbd sse sse2 sse4.1 sse4.2 sse4a ssse3 stibp
  svm svm-lock svme-addr-chk syscall taa-no tbm tce tm tm2 topoext tsc
  tsc-adjust tsc-deadline tsc-scale tsx-ctrl tsx-ldtrk umip v-vmsave-vmload
  vaes vgif virt-ssbd vmcb-clean vme vmx vmx-activity-hlt
  vmx-activity-shutdown vmx-activity-wait-sipi vmx-apicv-register
  vmx-apicv-vid vmx-apicv-x2apic vmx-apicv-xapic vmx-cr3-load-noexit
  vmx-cr3-store-noexit vmx-cr8-load-exit vmx-cr8-store-exit vmx-desc-exit
  vmx-encls-exit vmx-entry-ia32e-mode vmx-entry-load-bndcfgs
  vmx-entry-load-efer vmx-entry-load-pat vmx-entry-load-perf-global-ctrl
  vmx-entry-load-pkrs vmx-entry-load-rtit-ctl vmx-entry-noload-debugctl
  vmx-ept vmx-ept-1gb vmx-ept-2mb vmx-ept-advanced-exitinfo
  vmx-ept-execonly vmx-eptad vmx-eptp-switching vmx-exit-ack-intr
  vmx-exit-clear-bndcfgs vmx-exit-clear-rtit-ctl vmx-exit-load-efer
  vmx-exit-load-pat vmx-exit-load-perf-global-ctrl vmx-exit-load-pkrs
  vmx-exit-nosave-debugctl vmx-exit-save-efer vmx-exit-save-pat
  vmx-exit-save-preemption-timer vmx-flexpriority vmx-hlt-exit vmx-ins-outs
  vmx-intr-exit vmx-invept vmx-invept-all-context vmx-invept-single-context
  vmx-invept-single-context vmx-invept-single-context-noglobals
  vmx-invlpg-exit vmx-invpcid-exit vmx-invvpid vmx-invvpid-all-context
  vmx-invvpid-single-addr vmx-io-bitmap vmx-io-exit vmx-monitor-exit
  vmx-movdr-exit vmx-msr-bitmap vmx-mtf vmx-mwait-exit vmx-nmi-exit
  vmx-page-walk-4 vmx-page-walk-5 vmx-pause-exit vmx-ple vmx-pml
  vmx-posted-intr vmx-preemption-timer vmx-rdpmc-exit vmx-rdrand-exit
  vmx-rdseed-exit vmx-rdtsc-exit vmx-rdtscp-exit vmx-secondary-ctls
  vmx-shadow-vmcs vmx-store-lma vmx-true-ctls vmx-tsc-offset
  vmx-tsc-scaling vmx-unrestricted-guest vmx-vintr-pending vmx-vmfunc
  vmx-vmwrite-vmexit-fields vmx-vnmi vmx-vnmi-pending vmx-vpid
  vmx-wbinvd-exit vmx-xsaves vmx-zero-len-inject vpclmulqdq waitpkg
  wbnoinvd wdt x2apic xcrypt xcrypt-en xfd xgetbv1 xop xsave xsavec
  xsaveerptr xsaveopt xsaves xstore xstore-en xtpr
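
Choosing a CPU model from the list above and enabling or disabling individual flags works the same way as in the older versions covered below. A minimal hedged sketch (the disk image path and the chosen model/flags are only placeholders for illustration):

qemu-system-x86_64 -enable-kvm \
-cpu Skylake-Server-v5,-sse4.1,-sse4.2 \
-smp 2 -m 4096 \
-drive file=/mnt/storage/qemu/guest1.qcow2,if=virtio \
-daemonize -vnc 127.0.0.1:2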


How to run QEMU full virtualization with MacVTap networking using NetworkManager under CentOS 8

In addition to the previously presented article on the subject – Howto do QEMU full virtualization with MacVTap networking – this one shows how to run a QEMU virtual machine with a MacVTap device in bridge mode on the host server, configured only by using the NetworkManager CLI – nmcli.

It is worth mentioning that MacVTap is a virtual bridge, which makes the host and the guest device show up directly on the host’s switch. So when using QEMU, the virtualized guest system behaves as if it is connected to the host’s switch, with one limitation – the host and the guest cannot communicate with each other. The IPs of the host won’t be reachable from the guest, so NAT (masquerade) between the host and the guest is not possible with this setup. Still, if the NAT server is on another machine or a real IP is planned for the guest, MacVTap is the right functionality to use with the QEMU guest system.

Summary

  1. Add MacVTap device in bridge mode with name macvtap0.
  2. Install QEMU.
  3. Create QEMU local disk.
  4. Run a QEMU virtual server.

STEP 1) Add MacVTap device in bridge mode with name macvtap0

[root@srv ~]# nmcli connection add type macvlan dev enp0s3 mode bridge tap yes ifname macvtap0 con-name macvtap0 ip4 0.0.0.0/24
Connection 'macvtap0' (7a5ef04c-ea98-4642-ac5d-4239f715f631) successfully added.
[root@srv ~]# nmcli con
NAME      UUID                                  TYPE      DEVICE   
enp0s3    09497bbf-da59-42b7-a72c-d69369760b36  ethernet  enp0s3   
macvtap0  7a5ef04c-ea98-4642-ac5d-4239f715f631  macvlan   macvtap0 

First, create a MacVTap device with the name macvtap0 in bridge mode on top of the network interface enp0s3, and a connection with the name macvtap0. The IP is set to manual mode.
More detailed information on how to create and add a MacVTap device with NetworkManager is available here – Create MacVTap device using NetworkManager nmcli under CentOS 8
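A quick way to verify the new device exists and is indeed in bridge mode (a hedged check; the device name is the one created above):

[root@srv ~]# ip -d link show macvtap0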

STEP 2) Install QEMU.

Install the QEMU virtualization tools under CentOS 8 Stream. At present, the QEMU version is 6.2, which is pretty new.
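A hedged sketch of the installation using the packages from the standard CentOS 8 Stream AppStream repository (the exact package set may differ slightly depending on the enabled modules):

[root@srv ~]# dnf install -y qemu-kvm qemu-img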

How to run QEMU full virtualization with bridged networking using NetworkManager under CentOS 8

In addition to the previously presented article on the subject – Howto do QEMU full virtualization with bridged networking – this one shows how to run a QEMU virtual machine with bridged networking on the host server, configured only by using the NetworkManager CLI – nmcli.

It is worth mentioning that the bridge interface presented in this article is a local bridge device for the server – no Internet addresses or real (main, Internet-connected) network cards are bound to it, so no MAC addresses of the enslaved bridged devices will leave the server.
If a network bridge that includes the Internet-facing (main) server network device is needed, for example to set real IPs in a virtual machine, there is another article on the bridge networking subject – Replace current interface configuration with a bridge device using nmcli (NetworkManager)

Summary

  1. Add bridge and TUN/TAP device.
  2. Install QEMU.
  3. Create QEMU local disk.
  4. Run a QEMU virtual server.

STEP 1) Add bridge and TUN/TAP device.

[root@srv ~]# nmcli connection add type bridge ifname br0 con-name br0 ipv4.method manual ipv4.addresses "192.168.0.1/24"
Connection 'br0' (ad6878c8-1e06-4af8-a81f-1eb39e761df8) successfully added.
[root@srv ~]# nmcli connection up br0
Connection successfully activated (master waiting for slaves) (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/3)
[root@srv ~]# nmcli connection add type tun ifname tap0 con-name tap0 mode tap owner 0 ip4 0.0.0.0/24
Connection 'tap0' (dacee2be-a14b-4cf5-83d4-96d072a96725) successfully added.
[root@srv ~]# nmcli con add type bridge-slave ifname tap0 master br0
Connection 'bridge-slave-tap0' (66490382-b239-4eb2-ae1d-ee811e39596c) successfully added.
[root@srv ~]# nmcli con
NAME               UUID                                  TYPE      DEVICE 
System eno1        abf4c85b-57cc-4484-4fa9-b4a71689c359  ethernet  eno1   
br0                ad6878c8-1e06-4af8-a81f-1eb39e761df8  bridge    br0    
tap0               dacee2be-a14b-4cf5-83d4-96d072a96725  tun       tap0   
bridge-slave-tap0  66490382-b239-4eb2-ae1d-ee811e39596c  ethernet  -- 

First, a bridge device is added with a manual IP. If the IP is skipped, the bridge interface br0 would have DHCP enabled by default, which may not be desired.
More detailed information on how to create and add a TUN/TAP device with NetworkManager is available here – Create bridge and add TUN/TAP device using NetworkManager nmcli under CentOS 8
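If the bridge-slave-tap0 connection is not activated automatically (in the output above it shows no DEVICE yet), it can be brought up manually and verified – a hedged sketch using the connection names created above:

[root@srv ~]# nmcli connection up bridge-slave-tap0
[root@srv ~]# nmcli device status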

STEP 2) Install QEMU.

Install the QEMU virtualization tools under CentOS 8 Stream. At present, the QEMU version is 6.2, which is pretty new.

QEMU full virtualization – CPU emulations (enable/disable CPU flags/instruction sets) of QEMU 6.2.0

This article is an updated version of the old QEMU article about the CPU flags available in version 2.0.0 – QEMU full virtualization – CPU emulations (enable/disable CPU flags/instruction sets) of QEMU 2.0.0.
The latest version of QEMU is 6.2.0 and it offers way more CPU flags and features! You can use QEMU with nearly native full virtualization. Here are some important tips for the guest CPU to consider when using QEMU directly (without any virtualization manager like virt-manager, libvirt and so on).

TIP 1) Choose your host CPU emulation

You can see what options are available for host emulation with:

root@srv ~ # qemu-system-x86_64 -cpu help
Available CPUs:
x86 486                   (alias configured by machine type)                        
x86 486-v1                                                                          
x86 Broadwell             (alias configured by machine type)                        
x86 Broadwell-IBRS        (alias of Broadwell-v3)                                   
x86 Broadwell-noTSX       (alias of Broadwell-v2)                                   
x86 Broadwell-noTSX-IBRS  (alias of Broadwell-v4)                                   
x86 Broadwell-v1          Intel Core Processor (Broadwell)                          
x86 Broadwell-v2          Intel Core Processor (Broadwell, no TSX)                  
x86 Broadwell-v3          Intel Core Processor (Broadwell, IBRS)                    
x86 Broadwell-v4          Intel Core Processor (Broadwell, no TSX, IBRS)            
x86 Cascadelake-Server    (alias configured by machine type)                        
x86 Cascadelake-Server-noTSX  (alias of Cascadelake-Server-v3)                          
x86 Cascadelake-Server-v1  Intel Xeon Processor (Cascadelake)                        
x86 Cascadelake-Server-v2  Intel Xeon Processor (Cascadelake) [ARCH_CAPABILITIES]    
x86 Cascadelake-Server-v3  Intel Xeon Processor (Cascadelake) [ARCH_CAPABILITIES, no TSX]
x86 Cascadelake-Server-v4  Intel Xeon Processor (Cascadelake) [ARCH_CAPABILITIES, no TSX]
x86 Conroe                (alias configured by machine type)                        
x86 Conroe-v1             Intel Celeron_4x0 (Conroe/Merom Class Core 2)             
x86 Cooperlake            (alias configured by machine type)                        
x86 Cooperlake-v1         Intel Xeon Processor (Cooperlake)                         
x86 Denverton             (alias configured by machine type)                        
x86 Denverton-v1          Intel Atom Processor (Denverton)                          
x86 Denverton-v2          Intel Atom Processor (Denverton) [no MPX, no MONITOR]     
x86 Dhyana                (alias configured by machine type)                        
x86 Dhyana-v1             Hygon Dhyana Processor                                    
x86 EPYC                  (alias configured by machine type)                        
x86 EPYC-IBPB             (alias of EPYC-v2)                                        
x86 EPYC-Milan            (alias configured by machine type)                        
x86 EPYC-Milan-v1         AMD EPYC-Milan Processor                                  
x86 EPYC-Rome             (alias configured by machine type)                        
x86 EPYC-Rome-v1          AMD EPYC-Rome Processor                                   
x86 EPYC-Rome-v2          AMD EPYC-Rome Processor                                   
x86 EPYC-v1               AMD EPYC Processor                                        
x86 EPYC-v2               AMD EPYC Processor (with IBPB)                            
x86 EPYC-v3               AMD EPYC Processor                                        
x86 Haswell               (alias configured by machine type)                        
x86 Haswell-IBRS          (alias of Haswell-v3)                                     
x86 Haswell-noTSX         (alias of Haswell-v2)                                     
x86 Haswell-noTSX-IBRS    (alias of Haswell-v4)                                     
x86 Haswell-v1            Intel Core Processor (Haswell)                            
x86 Haswell-v2            Intel Core Processor (Haswell, no TSX)                    
x86 Haswell-v3            Intel Core Processor (Haswell, IBRS)                      
x86 Haswell-v4            Intel Core Processor (Haswell, no TSX, IBRS)              
x86 Icelake-Client        (alias configured by machine type)                        
x86 Icelake-Client-noTSX  (alias of Icelake-Client-v2)                              
x86 Icelake-Client-v1     Intel Core Processor (Icelake) [deprecated]               
x86 Icelake-Client-v2     Intel Core Processor (Icelake) [no TSX, deprecated]       
x86 Icelake-Server        (alias configured by machine type)                        
x86 Icelake-Server-noTSX  (alias of Icelake-Server-v2)                              
x86 Icelake-Server-v1     Intel Xeon Processor (Icelake)                            
x86 Icelake-Server-v2     Intel Xeon Processor (Icelake) [no TSX]                   
x86 Icelake-Server-v3     Intel Xeon Processor (Icelake)                            
x86 Icelake-Server-v4     Intel Xeon Processor (Icelake)                            
x86 IvyBridge             (alias configured by machine type)                        
x86 IvyBridge-IBRS        (alias of IvyBridge-v2)                                   
x86 IvyBridge-v1          Intel Xeon E3-12xx v2 (Ivy Bridge)                        
x86 IvyBridge-v2          Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS)                  
x86 KnightsMill           (alias configured by machine type)                        
x86 KnightsMill-v1        Intel Xeon Phi Processor (Knights Mill)                   
x86 Nehalem               (alias configured by machine type)                        
x86 Nehalem-IBRS          (alias of Nehalem-v2)                                     
x86 Nehalem-v1            Intel Core i7 9xx (Nehalem Class Core i7)                 
x86 Nehalem-v2            Intel Core i7 9xx (Nehalem Core i7, IBRS update)          
x86 Opteron_G1            (alias configured by machine type)                        
x86 Opteron_G1-v1         AMD Opteron 240 (Gen 1 Class Opteron)                     
x86 Opteron_G2            (alias configured by machine type)                        
x86 Opteron_G2-v1         AMD Opteron 22xx (Gen 2 Class Opteron)                    
x86 Opteron_G3            (alias configured by machine type)                        
x86 Opteron_G3-v1         AMD Opteron 23xx (Gen 3 Class Opteron)                    
x86 Opteron_G4            (alias configured by machine type)                        
x86 Opteron_G4-v1         AMD Opteron 62xx class CPU                                
x86 Opteron_G5            (alias configured by machine type)                        
x86 Opteron_G5-v1         AMD Opteron 63xx class CPU                                
x86 Penryn                (alias configured by machine type)                        
x86 Penryn-v1             Intel Core 2 Duo P9xxx (Penryn Class Core 2)              
x86 SandyBridge           (alias configured by machine type)                        
x86 SandyBridge-IBRS      (alias of SandyBridge-v2)                                 
x86 SandyBridge-v1        Intel Xeon E312xx (Sandy Bridge)                          
x86 SandyBridge-v2        Intel Xeon E312xx (Sandy Bridge, IBRS update)             
x86 Skylake-Client        (alias configured by machine type)                        
x86 Skylake-Client-IBRS   (alias of Skylake-Client-v2)                              
x86 Skylake-Client-noTSX-IBRS  (alias of Skylake-Client-v3)                              
x86 Skylake-Client-v1     Intel Core Processor (Skylake)                            
x86 Skylake-Client-v2     Intel Core Processor (Skylake, IBRS)                      
x86 Skylake-Client-v3     Intel Core Processor (Skylake, IBRS, no TSX)              
x86 Skylake-Server        (alias configured by machine type)                        
x86 Skylake-Server-IBRS   (alias of Skylake-Server-v2)                              
x86 Skylake-Server-noTSX-IBRS  (alias of Skylake-Server-v3)                              
x86 Skylake-Server-v1     Intel Xeon Processor (Skylake)                            
x86 Skylake-Server-v2     Intel Xeon Processor (Skylake, IBRS)                      
x86 Skylake-Server-v3     Intel Xeon Processor (Skylake, IBRS, no TSX)              
x86 Skylake-Server-v4     Intel Xeon Processor (Skylake, IBRS, no TSX)              
x86 Snowridge             (alias configured by machine type)                        
x86 Snowridge-v1          Intel Atom Processor (SnowRidge)                          
x86 Snowridge-v2          Intel Atom Processor (Snowridge, no MPX)                  
x86 Westmere              (alias configured by machine type)                        
x86 Westmere-IBRS         (alias of Westmere-v2)                                    
x86 Westmere-v1           Westmere E56xx/L56xx/X56xx (Nehalem-C)                    
x86 Westmere-v2           Westmere E56xx/L56xx/X56xx (IBRS update)                  
x86 athlon                (alias configured by machine type)                        
x86 athlon-v1             QEMU Virtual CPU version 2.5+                             
x86 core2duo              (alias configured by machine type)                        
x86 core2duo-v1           Intel(R) Core(TM)2 Duo CPU     T7700  @ 2.40GHz           
x86 coreduo               (alias configured by machine type)                        
x86 coreduo-v1            Genuine Intel(R) CPU           T2600  @ 2.16GHz           
x86 kvm32                 (alias configured by machine type)                        
x86 kvm32-v1              Common 32-bit KVM processor                               
x86 kvm64                 (alias configured by machine type)                        
x86 kvm64-v1              Common KVM processor                                      
x86 n270                  (alias configured by machine type)                        
x86 n270-v1               Intel(R) Atom(TM) CPU N270   @ 1.60GHz                    
x86 pentium               (alias configured by machine type)                        
x86 pentium-v1                                                                      
x86 pentium2              (alias configured by machine type)                        
x86 pentium2-v1                                                                     
x86 pentium3              (alias configured by machine type)                        
x86 pentium3-v1                                                                     
x86 phenom                (alias configured by machine type)                        
x86 phenom-v1             AMD Phenom(tm) 9550 Quad-Core Processor                   
x86 qemu32                (alias configured by machine type)                        
x86 qemu32-v1             QEMU Virtual CPU version 2.5+                             
x86 qemu64                (alias configured by machine type)                        
x86 qemu64-v1             QEMU Virtual CPU version 2.5+                             
x86 base                  base CPU model type with no features enabled              
x86 host                  KVM processor with all supported host features            
x86 max                   Enables all features supported by the accelerator in the current host

Recognized CPUID flags:
  3dnow 3dnowext 3dnowprefetch abm ace2 ace2-en acpi adx aes amd-no-ssb
  amd-ssbd amd-stibp apic arat arch-capabilities avic avx avx2
  avx512-4fmaps avx512-4vnniw avx512-bf16 avx512-fp16 avx512-vp2intersect
  avx512-vpopcntdq avx512bitalg avx512bw avx512cd avx512dq avx512er avx512f
  avx512ifma avx512pf avx512vbmi avx512vbmi2 avx512vl avx512vnni bmi1 bmi2
  bus-lock-detect cid cldemote clflush clflushopt clwb clzero cmov
  cmp-legacy core-capability cr8legacy cx16 cx8 dca de decodeassists ds
  ds-cpl dtes64 erms est extapic f16c flushbyasid fma fma4 fpu fsgsbase
  fsrm full-width-write fxsr fxsr-opt gfni hle ht hypervisor ia64 ibpb ibrs
  ibrs-all ibs intel-pt intel-pt-lip invpcid invtsc kvm-asyncpf
  kvm-asyncpf-int kvm-hint-dedicated kvm-mmu kvm-msi-ext-dest-id
  kvm-nopiodelay kvm-poll-control kvm-pv-eoi kvm-pv-ipi kvm-pv-sched-yield
  kvm-pv-tlb-flush kvm-pv-unhalt kvm-steal-time kvmclock kvmclock
  kvmclock-stable-bit la57 lahf-lm lbrv lm lwp mca mce md-clear mds-no
  misalignsse mmx mmxext monitor movbe movdir64b movdiri mpx msr mtrr
  nodeid-msr npt nrip-save nx osvw pae pat pause-filter pbe pcid pclmulqdq
  pcommit pdcm pdpe1gb perfctr-core perfctr-nb pfthreshold pge phe phe-en
  pks pku pmm pmm-en pn pni popcnt pschange-mc-no pse pse36 rdctl-no rdpid
  rdrand rdseed rdtscp rsba rtm sep serialize sha-ni skinit
  skip-l1dfl-vmentry smap smep smx spec-ctrl split-lock-detect ss ssb-no
  ssbd sse sse2 sse4.1 sse4.2 sse4a ssse3 stibp svm svm-lock svme-addr-chk
  syscall taa-no tbm tce tm tm2 topoext tsc tsc-adjust tsc-deadline
  tsc-scale tsx-ctrl tsx-ldtrk umip v-vmsave-vmload vaes vgif virt-ssbd
  vmcb-clean vme vmx vmx-activity-hlt vmx-activity-shutdown
  vmx-activity-wait-sipi vmx-apicv-register vmx-apicv-vid vmx-apicv-x2apic
  vmx-apicv-xapic vmx-cr3-load-noexit vmx-cr3-store-noexit
  vmx-cr8-load-exit vmx-cr8-store-exit vmx-desc-exit vmx-encls-exit
  vmx-entry-ia32e-mode vmx-entry-load-bndcfgs vmx-entry-load-efer
  vmx-entry-load-pat vmx-entry-load-perf-global-ctrl vmx-entry-load-pkrs
  vmx-entry-load-rtit-ctl vmx-entry-noload-debugctl vmx-ept vmx-ept-1gb
  vmx-ept-2mb vmx-ept-advanced-exitinfo vmx-ept-execonly vmx-eptad
  vmx-eptp-switching vmx-exit-ack-intr vmx-exit-clear-bndcfgs
  vmx-exit-clear-rtit-ctl vmx-exit-load-efer vmx-exit-load-pat
  vmx-exit-load-perf-global-ctrl vmx-exit-load-pkrs
  vmx-exit-nosave-debugctl vmx-exit-save-efer vmx-exit-save-pat
  vmx-exit-save-preemption-timer vmx-flexpriority vmx-hlt-exit vmx-ins-outs
  vmx-intr-exit vmx-invept vmx-invept-all-context vmx-invept-single-context
  vmx-invept-single-context vmx-invept-single-context-noglobals
  vmx-invlpg-exit vmx-invpcid-exit vmx-invvpid vmx-invvpid-all-context
  vmx-invvpid-single-addr vmx-io-bitmap vmx-io-exit vmx-monitor-exit
  vmx-movdr-exit vmx-msr-bitmap vmx-mtf vmx-mwait-exit vmx-nmi-exit
  vmx-page-walk-4 vmx-page-walk-5 vmx-pause-exit vmx-ple vmx-pml
  vmx-posted-intr vmx-preemption-timer vmx-rdpmc-exit vmx-rdrand-exit
  vmx-rdseed-exit vmx-rdtsc-exit vmx-rdtscp-exit vmx-secondary-ctls
  vmx-shadow-vmcs vmx-store-lma vmx-true-ctls vmx-tsc-offset
  vmx-unrestricted-guest vmx-vintr-pending vmx-vmfunc
  vmx-vmwrite-vmexit-fields vmx-vnmi vmx-vnmi-pending vmx-vpid
  vmx-wbinvd-exit vmx-xsaves vmx-zero-len-inject vpclmulqdq waitpkg
  wbnoinvd wdt x2apic xcrypt xcrypt-en xgetbv1 xop xsave xsavec xsaveerptr
  xsaveopt xsaves xstore xstore-en xtpr

The number of supported flags grew enormously compared to the old versions of QEMU and, in fact, they include almost all available CPU flags. There are also several times more supported CPU models than before! The above list of supported CPUs means the virtual guest machine can use one of them, and the guest operating system will have all the flags that CPU model supports. In fact, the guest virtual system will report the selected CPU model from the list above to its operating system.
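For example, booting a guest with the EPYC-Rome-v2 model (a hedged sketch – any model from the list above works the same way; the other command line options are omitted for brevity):

qemu-system-x86_64 -enable-kvm -cpu EPYC-Rome-v2 ...

Inside the guest, /proc/cpuinfo should then report something like:

model name      : AMD EPYC-Rome Processor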

Expand disk and the root partition of the QEMU virtual server

This article shows how easy it is to grow the size of a QEMU virtual disk and its partitions (along with the ext4 file system). Of course, you can also use this article as an example of expanding the partitions of a physical disk.

Our setup is a QEMU virtual server using a raw image of 20G, and the steps are as follows:

  1. Stop the virtual server
  2. Resize with qemu-img the raw image of the virtual server
  3. Start the virtual server
  4. Get a root ssh shell (probably by using openssh)
  5. Use parted to resize the partition (and fix the GPT of the disk – now the disk is larger, so the GPT table needs fixing).
  6. Use resize2fs to resize the ext4 file system.

STEP 1) Power off your virtual server.

The best way is to power it off from within the server with the “poweroff” command. Be careful to check whether the QEMU process on the host server has exited. It is almost certain the QEMU process has exited if its VNC port is released.
If you use virsh (i.e. libvirt), you may execute:

virsh shutdown my-private-vm-01
virsh destroy my-private-vm-01

The destroy command ensures no QEMU process is still operating on the disk image file, but it is dangerous for your data if you issue it on a running virtual server, because unsaved data may be lost.
If you use QEMU manually, wait for the process to exit, or, if you have enabled the management console, connect to it using telnet and just quit – this will destroy the QEMU virtual server process – again, be careful with unsaved data.

[root@lsrv1 ~]# ps axuf|grep qemu
root     15575  2.3 50.1 13061032 8112212 ?    Sl   May08 1522:27 qemu-system-x86_64 -enable-kvm -smp 4,maxcpus=8 -daemonize -vnc :30 -cdrom /mnt/vm/isos/CentOS-7-x86_64-Minimal-1810.iso -drive file=/mnt/vm/images/templatesrv-wordpress.bin,cache=none,aio=threads,if=virtio -boot c -net nic,model=virtio,macaddr=00:00:00:00:00:30 -net tap,ifname=tap30,script=no,downscript=no -balloon virtio -m 8144 -monitor telnet:127.0.0.1:5830,server,nowait
[root@srv-host ~]# telnet 127.0.0.1 5830
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
QEMU 2.0.0 monitor - type 'help' for more information
(qemu) q
Connection closed by foreign host.

If you use a web interface (for example WebVirtMgr) check whether the virtual server is in power-off state.

STEP 2) Resize the image file of the virtual server.

Find where the virtual servers’ image files are located in your installation and use qemu-img. We want to increase the size by 174GB, to around 200GB.

qemu-img resize my-private-vm-01.img +174GB

STEP 3) Start your server.

Start your server by issuing a command with virsh or QEMU (qemu-system-x86_64), or from a web interface if you use one (like WebVirtMgr).
*virsh and libvirt:

virsh start my-private-vm-01

*Manual start of QEMU emulator – qemu-system-x86_64:

qemu-system-x86_64 -enable-kvm -smp 4,maxcpus=8 -daemonize -vnc :30 -cdrom /mnt/vm/isos/CentOS-7-x86_64-Minimal-1810.iso -drive file=/mnt/vm/images/templatesrv-wordpress.bin,cache=none,aio=threads,if=virtio -boot c -net nic,model=virtio,macaddr=00:00:00:00:00:30 -net tap,ifname=tap30,script=no,downscript=no -balloon virtio -m 8144 -monitor telnet:127.0.0.1:5830,server,nowait

Or just use the web browser and start the virtual server from WebVirtMgr, if that is what you use.

STEP 4) Open a shell to your server.

We use the OpenSSH client to connect to our server.

STEP 5) Use parted to resize the partition.

The program “parted” will report that the partition table does not use the whole available disk, which is perfectly normal because we’ve just increased the disk size. Just confirm to fix the GPT partition table:

parted /dev/vda
GNU Parted 3.2
Using /dev/vda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p                                                                
Warning: Not all of the space available to /dev/vda appears to be used, you can fix the GPT to use all of the space (an extra 367001600 blocks) or continue with the current
setting? 
Fix/Ignore? Fix                                                           
Model: Virtio Block Device (virtblk)
Disk /dev/vda: 215GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system     Name  Flags
 1      1049kB  2097kB  1049kB                        bios_grub
 2      2097kB  4096MB  4094MB  linux-swap(v1)
 3      4096MB  24.0GB  19.9GB  ext4

(parted) resizepart 3 -1                                                  
Warning: Partition /dev/vda3 is being used. Are you sure you want to continue?
parted: invalid token: -1
Yes/No? Yes
End?  [24.0GB]? -1
(parted) p
Model: Virtio Block Device (virtblk)
Disk /dev/vda: 215GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system     Name  Flags
 1      1049kB  2097kB  1049kB                        bios_grub
 2      2097kB  4096MB  4094MB  linux-swap(v1)
 3      4096MB  215GB   211GB   ext4

(parted) q                                                                
Information: You may need to update /etc/fstab.

There is a warning that the partition is in use, but it is perfectly OK to continue.
*parted reports "invalid token: -1", but -1 is accepted when entered again at the "End?" prompt.
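A hedged alternative that avoids the "invalid token" message is to give the end value as a percentage on the same line in the same interactive parted session:

(parted) resizepart 3 100%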

STEP 6) Resize the ext4 file system online

Use the tool resize2fs to resize EXT4.

resize2fs /dev/vda3
resize2fs 1.42.13 (17-May-2015)
Filesystem at /dev/vda3 is mounted on /; on-line resizing required
old_desc_blocks = 2, new_desc_blocks = 13
The filesystem on /dev/vda3 is now 51428620 (4k) blocks long.

To check the resize operation:

srv1-vm ~ # dmesg|grep EXT4
[  449.330140] EXT4-fs (vda3): resizing filesystem from 4859392 to 51428620 blocks
[  449.936044] EXT4-fs (vda3): resized filesystem to 51428620
srv1-vm ~ # df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            3.9G     0  3.9G   0% /dev
tmpfs           798M  3.6M  795M   1% /run
/dev/vda3       193G  5.7G  180G   4% /
tmpfs           3.9G  196K  3.9G   1% /dev/shm

Output log of the whole resize operation

srv-vm1 ~ # parted /dev/vda
GNU Parted 3.2
Using /dev/vda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p                                                                
Model: Virtio Block Device (virtblk)
Disk /dev/vda: 26.8GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system     Name  Flags
 1      1049kB  2097kB  1049kB                        bios_grub
 2      2097kB  4096MB  4094MB  linux-swap(v1)
 3      4096MB  24.0GB  19.9GB  ext4

(parted) q                                                                
srv-vm1 ~ # poweroff
Connection to srv-vm1 closed by remote host.
Connection to srv-vm1 closed.
myuser@gw1:~$ sshh srv1-host
srv1-host ~ # cd /mnt/vm/images
srv1-host images # qemu-img resize srv-vm1.img +174GB
Image resized.
srv1-host images # logout
Connection to srv1-host closed.
myuser@gw1:~$ sshh srv-vm1
srv-vm1 ~ # df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            3.9G     0  3.9G   0% /dev
tmpfs           798M  3.5M  795M   1% /run
/dev/vda3        19G  5.7G   12G  33% /
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs           798M     0  798M   0% /run/user/0
srv-vm1 ~ # parted /dev/vda
GNU Parted 3.2
Using /dev/vda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p                                                                
Warning: Not all of the space available to /dev/vda appears to be used, you can fix the GPT to use all of the space (an extra 367001600 blocks) or continue with the current
setting? 
Fix/Ignore? Fix                                                           
Model: Virtio Block Device (virtblk)
Disk /dev/vda: 215GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system     Name  Flags
 1      1049kB  2097kB  1049kB                        bios_grub
 2      2097kB  4096MB  4094MB  linux-swap(v1)
 3      4096MB  24.0GB  19.9GB  ext4

(parted) q
Information: You may need to update /etc/fstab.

srv-vm1 ~ # parted /dev/vda                                           
GNU Parted 3.2
Using /dev/vda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) resizepart 3 -1                                                  
Warning: Partition /dev/vda3 is being used. Are you sure you want to continue?
parted: invalid token: -1                                                 
Yes/No? Yes                                                               
End?  [24.0GB]? -1                                                        
(parted) p                                                                
Model: Virtio Block Device (virtblk)
Disk /dev/vda: 215GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system     Name  Flags
 1      1049kB  2097kB  1049kB                        bios_grub
 2      2097kB  4096MB  4094MB  linux-swap(v1)
 3      4096MB  215GB   211GB   ext4

(parted) q                                                                
Information: You may need to update /etc/fstab.

srv-vm1 ~ # resize2fs /dev/vda3
resize2fs 1.42.13 (17-May-2015)
Filesystem at /dev/vda3 is mounted on /; on-line resizing required
old_desc_blocks = 2, new_desc_blocks = 13
The filesystem on /dev/vda3 is now 51428620 (4k) blocks long.

srv-vm1 ~ # dmesg|grep EXT4
[  449.330140] EXT4-fs (vda3): resizing filesystem from 4859392 to 51428620 blocks
[  449.936044] EXT4-fs (vda3): resized filesystem to 51428620

srv-vm1 ~ # touch /forcefsck
srv-vm1 ~ # reboot

We rebooted the virtual machine with a forced file system check as a precaution, but it is not required.

Bonus – physical disks setup

You would probably need an additional first step of copying your old disk to the new disk – basically, there are two ways to do it:

  • blind copy everything with a hardware duplicator or the Linux "dd" command (a hedged sketch follows below the list).
  • use gparted to copy the GPT table and the partitions to the new disk
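
A hedged sketch of the dd variant – /dev/sdX (the old disk) and /dev/sdY (the new, equal-sized or larger disk) are placeholders; double-check the device names, because dd overwrites the target without asking:

dd if=/dev/sdX of=/dev/sdY bs=4M status=progress conv=fsync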

QEMU full virtualization – CPU emulations (enable/disable CPU flags/instruction sets) of QEMU 2.0.0

This article follows the two QEMU full virtualization howtos – Howto do QEMU full virtualization with MacVTap networking and Howto do QEMU full virtualization with bridged networking.

You can use QEMU with nearly native full virtualization. Here are some important tips for the guest CPU to consider when using QEMU directly (without any virtualization manager like virt-manager, libvirt and so on).

TIP 1) Choose your host CPU emulation

You can see what options are available for host emulation with:

srv@local ~$ qemu-system-x86_64 -cpu help

x86           qemu64  QEMU Virtual CPU version 2.0.0                  
x86           phenom  AMD Phenom(tm) 9550 Quad-Core Processor         
x86         core2duo  Intel(R) Core(TM)2 Duo CPU     T7700  @ 2.40GHz 
x86            kvm64  Common KVM processor                            
x86           qemu32  QEMU Virtual CPU version 2.0.0                  
x86            kvm32  Common 32-bit KVM processor                     
x86          coreduo  Genuine Intel(R) CPU           T2600  @ 2.16GHz 
x86              486                                                  
x86          pentium                                                  
x86         pentium2                                                  
x86         pentium3                                                  
x86           athlon  QEMU Virtual CPU version 2.0.0                  
x86             n270  Intel(R) Atom(TM) CPU N270   @ 1.60GHz          
x86           Conroe  Intel Celeron_4x0 (Conroe/Merom Class Core 2)   
x86           Penryn  Intel Core 2 Duo P9xxx (Penryn Class Core 2)    
x86          Nehalem  Intel Core i7 9xx (Nehalem Class Core i7)       
x86         Westmere  Westmere E56xx/L56xx/X56xx (Nehalem-C)          
x86      SandyBridge  Intel Xeon E312xx (Sandy Bridge)                
x86          Haswell  Intel Core Processor (Haswell)                  
x86       Opteron_G1  AMD Opteron 240 (Gen 1 Class Opteron)           
x86       Opteron_G2  AMD Opteron 22xx (Gen 2 Class Opteron)          
x86       Opteron_G3  AMD Opteron 23xx (Gen 3 Class Opteron)          
x86       Opteron_G4  AMD Opteron 62xx class CPU                      
x86       Opteron_G5  AMD Opteron 63xx class CPU                      
x86             host  KVM processor with all supported host features (only available in KVM mode)

Recognized CPUID flags:
  pbe ia64 tm ht ss sse2 sse fxsr mmx acpi ds clflush pn pse36 pat cmov mca pge mtrr sep apic cx8 mce pae msr tsc pse de vme fpu
  hypervisor rdrand f16c avx osxsave xsave aes tsc-deadline popcnt movbe x2apic sse4.2|sse4_2 sse4.1|sse4_1 dca pcid pdcm xtpr cx16 fma cid ssse3 tm2 est smx vmx ds_cpl monitor dtes64 pclmulqdq|pclmuldq pni|sse3
  smap adx rdseed rtm invpcid erms bmi2 smep avx2 hle bmi1 fsgsbase
  3dnow 3dnowext lm|i64 rdtscp pdpe1gb fxsr_opt|ffxsr mmxext nx|xd syscall
  perfctr_nb perfctr_core topoext tbm nodeid_msr tce fma4 lwp wdt skinit xop ibs osvw 3dnowprefetch misalignsse sse4a abm cr8legacy extapic svm cmp_legacy lahf_lm
  pmm-en pmm phe-en phe ace2-en ace2 xcrypt-en xcrypt xstore-en xstore
  kvm_pv_unhalt kvm_pv_eoi kvm_steal_time kvm_asyncpf kvmclock kvm_mmu kvm_nopiodelay kvmclock
  pfthreshold pause_filter decodeassists flushbyasid vmcb_clean tsc_scale nrip_save svm_lock lbrv npt

The host server will expose a different instruction set to the guest server (the emulated CPU) depending on the chosen model, so when you choose to emulate, for example, "qemu64" with:

qemu-system-x86_64 -enable-kvm  \
-cpu qemu64,+ssse3,+sse4.1,+sse4.2,+x2apic -smp 2,maxcpus=8 \
-daemonize -vnc 192.168.0.10:1 \
-drive file=/mnt/storage/qemu/roofs/srv_virt.qcow2,index=0,cache=none,aio=threads,if=virtio \
-cdrom /mnt/storage/images/install-amd64-minimal-20140327.iso -boot d \
-net nic,model=virtio,macaddr=$(< /sys/class/net/macvtap0/address) \
-net tap,fd=3 3<>/dev/tap$(< /sys/class/net/macvtap0/ifindex) \
-balloon virtio -m 8192 \
-monitor telnet:127.0.0.1:5801,server,nowait -writeconfig /opt/qemu/config/srv_virt.qcow2.conf

The guest server (the virtual machine) will have the following CPU and instruction set:

vendor_id       : GenuineIntel
cpu family      : 6
model           : 6
model name      : QEMU Virtual CPU version 2.0.0
stepping        : 3
microcode       : 0x1
cpu MHz         : 2133.408
cache size      : 4096 KB
fpu             : yes
fpu_exception   : yes
cpuid level     : 4
wp              : yes
flags           : fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pse36 clflush mmx fxsr sse sse2 syscall nx lm rep_good nopl pni ssse3 cx16 sse4_1 x2apic popcnt hypervisor lahf_lm
bogomips        : 4266.81
clflush size    : 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management:

processor       : 1
vendor_id       : GenuineIntel
cpu family      : 6
model           : 6
model name      : QEMU Virtual CPU version 2.0.0
stepping        : 3
microcode       : 0x1
cpu MHz         : 2133.408
cache size      : 4096 KB
fpu             : yes
fpu_exception   : yes
cpuid level     : 4
wp              : yes
flags           : fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pse36 clflush mmx fxsr sse sse2 syscall nx lm rep_good nopl pni ssse3 cx16 sse4_1 x2apic popcnt hypervisor lahf_lm
bogomips        : 4266.81
clflush size    : 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management:

The guest gets the base set of instructions (flags) of the qemu64 model plus the ones explicitly enabled by our command with:

+ssse3,+sse4.1,+sse4.2,+x2apic

The full format of the -cpu option is:

-cpu qemu64,+ssse3,+sse4.1,+sse4.2,+x2apic

If you choose the last option:

-cpu host

the host server will expose its own processor and all of its supported flags to the virtual machine:

qemu-system-x86_64 -enable-kvm  \
-cpu host -smp 2,maxcpus=8 \
-daemonize -vnc 192.168.0.10:1 \
-drive file=/mnt/storage/qemu/roofs/srv_virt.qcow2,index=0,cache=none,aio=threads,if=virtio \
-cdrom /mnt/storage/images/install-amd64-minimal-20140327.iso -boot d \
-net nic,model=virtio,macaddr=$(< /sys/class/net/macvtap0/address) \
-net tap,fd=3 3<>/dev/tap$(< /sys/class/net/macvtap0/ifindex) \
-balloon virtio -m 8192 \
-monitor telnet:127.0.0.1:5801,server,nowait -writeconfig /opt/qemu/config/srv_virt.qcow2.conf

The virtual machine:

[root@vm0 ~]# cat /proc/cpuinfo 
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 44
model name      : Intel(R) Xeon(R) CPU           E5606  @ 2.13GHz
stepping        : 2
microcode       : 0x1
cpu MHz         : 2133.408
cache size      : 8192 KB
fpu             : yes
fpu_exception   : yes
cpuid level     : 11
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl pni pclmulqdq ssse3 cx16 pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes hypervisor lahf_lm tsc_adjust
bogomips        : 4266.81
clflush size    : 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management:

The host server:

[root@srv0 ~]# cat /proc/cpuinfo 
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 44
model name      : Intel(R) Xeon(R) CPU           E5606  @ 2.13GHz
stepping        : 2
microcode       : 0x14
cpu MHz         : 1200.000
cache size      : 8192 KB
physical id     : 0
siblings        : 4
core id         : 0
cpu cores       : 4
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 11
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 popcnt aes lahf_lm tpr_shadow vnmi flexpriority ept vpid dtherm arat
bogomips        : 4266.41
clflush size    : 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management:

TIP 2) Disable certain CPU flags (instruction sets)

As you can see from the above CPU options, you can hide the exact type of the host processor and you can disable specific CPU flags (instruction sets) in the user’s virtual machine. The reason is up to you – one example could be not offering “avx” (or “avx2”) to discourage crypto mining in the virtual machine, or limiting SSE2/3/4/4.2/SSSE3 and other “multimedia” instruction sets to discourage video encoding, and so on. It’s up to you what to offer to the virtual machine user.
Here is the command to emulate the host CPU with all supported flags but disable “sse4.1” and “sse4.2”:
The syntax:

-cpu host,-sse4.1,-sse4.2

And the qemu command is:

qemu-system-x86_64 -enable-kvm \
-cpu host,-sse4.1,-sse4.2 \
-smp 2,maxcpus=8 \
-daemonize -vnc 192.168.0.10:1 \
-drive file=/mnt/storage/qemu/roofs/srv_virt.qcow2,index=0,cache=none,aio=threads,if=virtio \
-cdrom /mnt/storage/images/install-amd64-minimal-20140327.iso -boot d \
-net nic,model=virtio,macaddr=$(< /sys/class/net/macvtap0/address) \
-net tap,fd=3 3<>/dev/tap$(< /sys/class/net/macvtap0/ifindex) \
-balloon virtio -m 8192 \
-monitor telnet:127.0.0.1:5801,server,nowait -writeconfig /opt/qemu/config/srv_virt.qcow2.conf

So the virtual machine lacks the disabled flags:

flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl pni pclmulqdq ssse3 cx16 pcid x2apic popcnt tsc_deadline_timer aes hypervisor lahf_lm tsc_adjust

TIP 3) Number of virtual processors in the virtual machine

The syntax

-smp 2,maxcpus=8

of the qemu command:

qemu-system-x86_64 -enable-kvm  -cpu host \
-smp 2,maxcpus=8 \
-daemonize -vnc 192.168.0.10:1 \
-drive file=/mnt/storage/qemu/roofs/srv_virt.qcow2,index=0,cache=none,aio=threads,if=virtio \
-cdrom /mnt/storage/images/install-amd64-minimal-20140327.iso -boot d \
-net nic,model=virtio,macaddr=$(< /sys/class/net/macvtap0/address) \
-net tap,fd=3 3<>/dev/tap$(< /sys/class/net/macvtap0/ifindex) \
-balloon virtio -m 8192 \
-monitor telnet:127.0.0.1:5801,server,nowait -writeconfig /opt/qemu/config/srv_virt.qcow2.conf

will start up the virtual machine with 2 processors, and you can hot add CPUs up to 8 in total at any time using the management console listening on 127.0.0.1:5801.
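
A hedged sketch of the hot add through the monitor (on this old QEMU 2.0 setup the monitor offered a cpu-add command for x86 CPU hot plug; recent QEMU versions removed it in favour of device_add, so the exact command may differ on your build):

telnet 127.0.0.1 5801
(qemu) info cpus
(qemu) cpu-add 2

The newly added CPU may also need to be brought online inside the guest, for example with echo 1 > /sys/devices/system/cpu/cpu2/online.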

Howto do QEMU full virtualization with bridged networking

This howto continues the previous one, “Howto do QEMU full virtualization with MacVTap networking”, except that it shows how to use a classic networking setup – a bridge device. Because this setup requires a distro-specific configuration (unless the bridge is simply added manually), it is separated into its own howto. To keep it clear and complete, the first two steps are repeated so this howto stays independent of the original one mentioned above.
To use full virtualization under Linux you can use QEMU alone, with no other library or manager like virt-manager. QEMU is simple enough and, with a couple of parameters, you can start KVM virtual machines with near-native performance. To use KVM you must enable it in the BIOS of your server (or desktop machine).

Here are the main steps:

STEP 1) Enable KVM in the BIOS

  • For an Intel machine, you must find the option Intel Virtualization Technology (or Intel VT-x), probably in the BIOS menu for Chipset, Advanced CPU Configuration or similar.
  • For an AMD machine the virtualization cannot be disabled, so it is enabled by default, but you can check for additional virtualization features to enable, like Virtualization Extensions, Vanderpool and others.
  • Also enable additional features – Intel VT-d or AMD IOMMU, if they are available.

Reboot your machine and check if the KVM is supported:

srv@local ~$ cat /proc/cpuinfo |grep -E "vmx|svm"
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti intel_pt tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap xsaveopt dtherm ida arat pln pts
...

STEP 2) Install QEMU

Under CentOS 7 you can just install a couple of packages – that's all you need:

yum install -y qemu qemu-common qemu-img qemu-kvm-common qemu-system-x86 qemu-user bridge-utils

Or under Ubuntu

apt-get install qemu-kvm bridge-utils

STEP 3) Prepare the network 1 – the bridge device

Under CentOS 7 add the following configuration file

/etc/sysconfig/network-scripts/ifcfg-br0

with the content of

DEVICE=br0
TYPE=Bridge
BOOTPROTO=none
ONBOOT=yes
IPADDR0=192.168.0.1
PREFIX0=24
#GATEWAY0=192.168.0.1
NETMASK=255.255.255.0
IPV6_FAILURE_FATAL=no
NM_CONTROLLED=no
ZONE=public

If you want to set a real IP on your virtual machine, you should set a real IP here and uncomment GATEWAY0 with the real gateway IP. If a real IP is used, then you should include the main Internet network interface in the bridge by adding at the end of the configuration file /etc/sysconfig/network-scripts/ifcfg-eth0 (if eth0 is your network interface):

...
BRIDGE=br0

And restart the network

srv@local ~$ systemctl restart network

Under Ubuntu add to the file

/etc/network/interfaces

the following:

# Bridge
auto br0
iface br0 inet static
  address 192.168.0.10
  netmask 255.255.255.0
#  gateway 192.168.0.1
  bridge_ports none
  bridge_stp off
  bridge_fd 0
  bridge_maxwait 0

If you want your virtual machine to use a real (public) IP, set a real IP here, uncomment the “gateway” line with the real gateway IP and replace “none” in the “bridge_ports” option with the name of your main Internet network interface. For example:

  ...
  bridge_ports eth0
  ...

And restart the network

srv@local ~$ /etc/init.d/networking restart

Or we can add the bridge device manually:

srv@local ~$ brctl addbr br0
srv@local ~$ ip link set dev br0 up
srv@local ~$ ip addr add 192.168.0.1/24 dev br0

If we use a real IP we have to add the main Internet network interface to the bridge, so that when the network is set up inside the virtual machine with a real IP it works without any additional configuration (an example is shown after the NAT commands below). If we use local IPs, as in this setup, and we want Internet access inside the virtual machine, we must enable masquerading and Linux routing. You can do it with:

srv@local ~$ echo 1 > /proc/sys/net/ipv4/ip_forward
#NAT with firewalld
srv@local ~$ firewall-cmd --add-masquerade --permanent
#NAT with iptables
srv@local ~$ iptables -t nat -A POSTROUTING -o br0 -j MASQUERADE

Use either the firewalld or the iptables setup, depending on your system configuration; just check whether firewalld is running with:

srv@local ~$ firewall-cmd --list-all

If you receive an error saying the command is not found or firewalld is not running, you should use the “NAT with iptables” variant.
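For the real-IP case mentioned above, the main Internet network interface can also be added to the bridge manually – a sketch assuming eth0 is that interface (adapt the name, and note that the host may lose connectivity for a moment while its IP configuration is moved to the bridge):

srv@local ~$ brctl addif br0 eth0
#or, with iproute2 only
srv@local ~$ ip link set dev eth0 master br0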
So the network is ready!

STEP 4) Prepare the network 2 – the tun/tap for the virtual machine

After the bridge device has been added, a tun/tap device, which will be used by the QEMU virtual machine, must be created and attached to the bridge:

srv@local ~$ ip tuntap add tap0 mode tap
srv@local ~$ brctl addif br0 tap0
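Depending on the distribution the tap device may be created in the DOWN state, so bring it up and verify it is attached to the bridge:

srv@local ~$ ip link set dev tap0 up
srv@local ~$ brctl show br0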

STEP 5) Create a QEMU hard drive

Create a 100G file

srv@local ~$ cd /mnt/storage1/disks/
srv@local ~$ qemu-img create -f qcow2 vm_harddisk.qcow2 100G
Formatting 'vm_harddisk.qcow2', fmt=qcow2 size=107374182400 encryption=off cluster_size=65536 lazy_refcounts=off 

or you can enable encryption (but then on every start of the virtual machine you must enter the encryption key through the QEMU monitor console before it can boot):

srv@local ~$ qemu-img create -f qcow2 vm_harddisk_e.bin -o encryption 100G
Formatting 'vm_harddisk_e.bin', fmt=qcow2 size=107374182400 encryption=on cluster_size=65536 lazy_refcounts=off
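You can inspect the freshly created image with qemu-img – the virtual size is 100G, while the actual disk usage starts near zero because the qcow2 format allocates space on demand:

srv@local ~$ qemu-img info vm_harddisk.qcow2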

STEP 6) Boot up the QEMU KVM virtual server

srv@local ~$ qemu-system-x86_64 -enable-kvm -cpu host -smp 4 -runas qemu -daemonize -vnc 127.0.0.1:1 \
-drive file=/mnt/storage1/disks/vm_harddisk.qcow2,index=0,cache=none,aio=threads,if=virtio \
-boot c -net nic,model=virtio,macaddr=00:00:00:00:00:01 -net tap,ifname=tap0 \
-balloon virtio -m 2048 -monitor telnet:127.0.0.1:5801,server,nowait

The command above will:

  • “-enable-kvm” – enable the KVM – full virtualization with near native performance
  • “-cpu host” – will expose all supported host CPU features (only supported in KVM mode)
  • “-smp 4” – sets 4 processors to the virtual machine
  • “-daemonize” – start the command in daemon mode
  • “-runas qemu” – run the process as the given user; you can run the whole virtual machine under a user created especially for it – there is no need to run it as root, and it is in fact recommended to run it under an unprivileged user
  • “-vnc 127.0.0.1:1” – start a VNC server on display 1, i.e. 127.0.0.1:5901; you can bind it to a real IP present on the server (for example 192.168.1.10:1 for 192.168.1.10:5901) or to 0.0.0.0:1 for 0.0.0.0:5901, but in every case limit the access by a firewall
  • “-drive file=/mnt/storage1/disks/vm_harddisk.qcow2,index=0,cache=none,aio=threads,if=virtio” – set the main hard drive of the system
  • “-boot c” – boot from the first hard disk (in QEMU “c” is the first hard disk and “d” is the first CD-ROM)
  • “-net nic,model=virtio,macaddr=00:00:00:00:00:01 -net tap,ifname=tap0” – set the network interface using the tap device created by STEP 3) and STEP 4)
  • “-balloon virtio” – use the balloon driver to be able to hot add or hot remove RAM (in newer QEMU versions this option is deprecated and can be skipped)
  • “-m 2048” – set the virtual RAM size in megabytes (2048 MB here)
  • “-monitor telnet:127.0.0.1:5801,server,nowait” – set the management console for this virtual server; you can connect with:
    srv@local ~$ telnet 127.0.0.1 5801
    Trying 127.0.0.1...
    Connected to 127.0.0.1.
    Escape character is '^]'.
    QEMU 2.0.0 monitor - type 'help' for more information
    (qemu) 
    <Press "CTRL+]">
    telnet> Connection closed.
    

    When quitting the management console you must NOT exit with quit/exit or CTRL+D, because that will terminate the virtual server; disconnect from the console with “CTRL+]” and then quit the telnet shell. From the console you can hot add/remove CPUs, RAM, network cards, PCI devices, hard drives, start/stop/shutdown/reset the virtual machine and a lot more.

The command boots the virtual machine from the hard drive given by “-drive”, with the network configured by the two “-net” options; the RAM uses the balloon driver and can be adjusted on the fly, the VNC server listens on 127.0.0.1:5901 (you will probably want to change this to the real IP of your server, but be careful to add a firewall rule for port 5901 – the VNC port) and the management console listens on 127.0.0.1:5801.
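As an illustration, a few of the most commonly used monitor commands (run “help” inside the monitor for the full list): “info status” shows whether the guest is running or paused, “info block” lists the block devices, “balloon 1024” shrinks the guest RAM to 1024 MB via the balloon driver and “system_powerdown” sends an ACPI power-button press to the guest:

(qemu) info status
(qemu) info block
(qemu) balloon 1024
(qemu) system_powerdown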

* Boot the virtual server from a virtual CD/DVD

You will probably need to boot from an installation disk the first time; this can be done with the following command:

srv@local ~$ qemu-system-x86_64 -enable-kvm -cpu host -smp 4 -runas qemu -daemonize -vnc 127.0.0.1:1 -cdrom /mnt/storage1/disks/isos/CentOS-7-x86_64-NetInstall-1708.iso -boot d -drive file=/mnt/storage1/disks/vm_harddisk.qcow2,index=0,cache=none,aio=threads,if=virtio -net nic,model=virtio,macaddr=00:00:00:00:00:01 -net tap,ifname=tap0 -balloon virtio -m 2048 -monitor telnet:127.0.0.1:5805,server,nowait

The changes:

  1. “-boot d” – the first boot device is now the CD/DVD drive (in QEMU “d” is the first CD-ROM and “c” is the first hard disk)
  2. “-cdrom /mnt/storage1/disks/isos/CentOS-7-x86_64-NetInstall-1708.iso” – added the installation disk to the virtual machine
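When the installation finishes, the installation disk can be ejected or swapped through the monitor instead of restarting QEMU – a sketch, assuming the CD-ROM shows up as “ide1-cd0” (check the exact device name with “info block” first; the ISO path below is just a placeholder):

(qemu) info block
(qemu) eject ide1-cd0
(qemu) change ide1-cd0 /mnt/storage1/disks/isos/another-install-disk.iso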

Newer QEMU versions may need “script=no,downscript=no” added to the tap0 interface:

srv@local ~$ qemu-system-x86_64 -enable-kvm -cpu host -smp 4 -runas qemu -daemonize -vnc 127.0.0.1:1 -cdrom /mnt/storage1/disks/isos/CentOS-7-x86_64-NetInstall-1708.iso -boot d -drive file=/mnt/storage1/disks/vm_harddisk.qcow2,index=0,cache=none,aio=threads,if=virtio -net nic,model=virtio,macaddr=00:00:00:00:00:01 -net tap,ifname=tap0,script=no,downscript=no -m 2048 -monitor telnet:127.0.0.1:5805,server,nowait

Howto do QEMU full virtualization with MacVTap networking

To use full virtualization under Linux you can use QEMU alone, with no other library or manager such as virt-manager. QEMU is simple enough, and with a couple of parameters you can start KVM virtual machines with near-native performance. To use KVM you must enable it in the BIOS of your server (or desktop machine).

Here are several simple steps to start a KVM virtual server:

STEP 1) Enable KVM in the BIOS

  • For Intel machines you must find the option Intel Virtualization Technology (or Intel VT-x), usually in the Chipset, Advanced CPU Configuration or a similar BIOS menu.
  • For AMD machines virtualization is enabled by default and often cannot be disabled at all, but check for additional virtualization features to enable, such as Virtualization Extensions, Vanderpool and others.
  • Also enable the additional features – Intel VT-d or AMD IOMMU – if they are available.

Reboot your machine and check whether KVM is supported:

srv@local ~$ cat /proc/cpuinfo |grep -E "vmx|svm"
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti intel_pt tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap xsaveopt dtherm ida arat pln pts
...

STEP 2) Install QEMU

Under CentOS 7 you can just install a couple of packages – that’s all you need:

yum install -y qemu qemu-common qemu-img qemu-kvm-common qemu-system-x86 qemu-user bridge-utils

Or under Ubuntu

apt-get install qemu-kvm bridge-utils

STEP 3) Prepare the network

srv@local ~$ ip link add link enp8s0f1 name macvtap0 type macvtap mode bridge
srv@local ~$ ip link set macvtap0 up

Here we create a macvtap0 device in bridge mode; these commands create a tap device bridged to the network interface “enp8s0f1” (in our case – replace this name with the interface you want to bridge your virtual machine network to, probably the main interface of your server/desktop machine). Only these two commands are needed; no other devices or network reloads are required.
The device will show up in “ip addr”:

7: macvtap0@enp8s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether 2e:51:7e:bb:44:ee brd ff:ff:ff:ff:ff:ff
    inet6 fe80::2c51:7eff:febb:44ee/64 scope link 
       valid_lft forever preferred_lft forever

This setup may expose the MAC address of the macvtap device on the router port the physical interface is connected to.
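The macvtap driver also creates a character device /dev/tapN, where N is the interface index of macvtap0 – this is the device handed to QEMU through the “fd=3” redirection in STEP 5. With the interface index 7 from the output above, it can be checked like this:

srv@local ~$ cat /sys/class/net/macvtap0/ifindex
7
srv@local ~$ ls -l /dev/tap7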

STEP 4) Create a QEMU hard drive

Create a 100G file

srv@local ~$ cd /mnt/storage1/disks/
srv@local ~$ qemu-img create -f qcow2 vm_harddisk.qcow2 100G
Formatting 'vm_harddisk.qcow2', fmt=qcow2 size=107374182400 encryption=off cluster_size=65536 lazy_refcounts=off 

or you can enable encryption (but then on every start of the virtual machine you must enter the encryption key through the QEMU monitor console before it can boot):

srv@local ~$ qemu-img create -f qcow2 vm_harddisk_e.bin -o encryption 100G
Formatting 'vm_harddisk_e.bin', fmt=qcow2 size=107374182400 encryption=on cluster_size=65536 lazy_refcounts=off

STEP 5) Boot up the QEMU KVM virtual server

srv@local ~$ qemu-system-x86_64 -enable-kvm -cpu host -smp 4 -runas qemu -daemonize -vnc 127.0.0.1:1 \
-drive file=/mnt/storage1/disks/vm_harddisk.qcow2,index=0,cache=none,aio=threads,if=virtio \
-boot c -net nic,model=virtio,macaddr=$(cat /sys/class/net/macvtap0/address) \
-net tap,fd=3 3<>/dev/tap$(cat /sys/class/net/macvtap0/ifindex) \
-balloon virtio -m 2048 -monitor telnet:127.0.0.1:5801,server,nowait

The command above will:

  • “-enable-kvm” – enable the KVM – full virtualization with near native performance
  • “-cpu host” – will expose all supported host CPU features (only supported in KVM mode)
  • “-smp 4” – sets 4 processors to the virtual machine
  • “-daemonize” – start the command in daemon mode
  • “-runas qemu” – run the process as the given user; you can run the whole virtual machine under a user created especially for it – there is no need to run it as root, and it is in fact recommended to run it under an unprivileged user
  • “-vnc 127.0.0.1:1” – start a VNC server on display 1, i.e. 127.0.0.1:5901; you can bind it to a real IP present on the server (for example 192.168.1.10:1 for 192.168.1.10:5901) or to 0.0.0.0:1 for 0.0.0.0:5901, but in every case limit the access by a firewall
  • “-drive file=/mnt/storage1/disks/vm_harddisk.qcow2,index=0,cache=none,aio=threads,if=virtio” – set the main hard drive of the system
  • “-boot c” – boot from the first hard disk (in QEMU “c” is the first hard disk and “d” is the first CD-ROM)
  • “-net nic,model=virtio,macaddr=$(cat /sys/class/net/macvtap0/address) -net tap,fd=3 3<>/dev/tap$(cat /sys/class/net/macvtap0/ifindex)” – set the network interface using the tap device created by macvtap0 device (STEP 3)
  • “-balloon virtio” – use the balloon driver to be able to hot add or hot remove RAM
  • “-m 2048” – set the virtual RAM size in megabytes (2048 MB here)
  • “-monitor telnet:127.0.0.1:5801,server,nowait” – set the management console for this virtual server; you can connect with:
    srv@local ~$ telnet 127.0.0.1 5801
    Trying 127.0.0.1...
    Connected to 127.0.0.1.
    Escape character is '^]'.
    QEMU 2.0.0 monitor - type 'help' for more information
    (qemu) 
    <Press "CTRL+]">
    telnet> Connection closed.
    

    When quitting the management console you must NOT exit with quit/exit or CTRL+D, because that will terminate the virtual server; disconnect from the console with “CTRL+]” and then quit the telnet shell. From the console you can hot add/remove CPUs, RAM, network cards, PCI devices, hard drives, start/stop/shutdown/reset the virtual machine and a lot more.

The command boots the virtual machine from the hard drive given by “-drive”, with the network configured by the two “-net” options; the RAM uses the balloon driver and can be adjusted on the fly, the VNC server listens on 127.0.0.1:5901 (you will probably want to change this to the real IP of your server, but be careful to add a firewall rule for port 5901 – the VNC port) and the management console listens on 127.0.0.1:5801.

* Boot the virtual server from a virtual CD/DVD

You will probably need to boot from an installation disk the first time; this can be done with the following command:

srv@local ~$ qemu-system-x86_64 -enable-kvm -cpu host -smp 4 -runas qemu -daemonize -vnc 127.0.0.1:1 -cdrom /mnt/storage1/disks/isos/CentOS-7-x86_64-NetInstall-1708.iso -boot d -drive file=/mnt/storage1/disks/vm_harddisk.qcow2,index=0,cache=none,aio=threads,if=virtio -net nic,model=virtio,macaddr=$(cat /sys/class/net/macvtap0/address) -net tap,fd=3 3<>/dev/tap$(cat /sys/class/net/macvtap0/ifindex) -balloon virtio -m 2048 -monitor telnet:127.0.0.1:5801,server,nowait

The changes:

  1. “-boot d” – the first boot device is now the CD/DVD drive (in QEMU “d” is the first CD-ROM and “c” is the first hard disk)
  2. “-cdrom /mnt/storage1/disks/isos/CentOS-7-x86_64-NetInstall-1708.iso” – added the installation disk to the virtual machine