Govee WiFi Thermometer Hygrometer H5179 cannot connect to the Wi-Fi

Govee WiFi Thermometer Hygrometer (model H5179) uses 3 AA batteries, which may last a year or two depending on how often the data is pushed to the servers on the Internet (i.e. how often the Wi-Fi module is used). The device may suddenly begin losing the Wi-Fi connection or even stop connecting at all, which is an indicator that the batteries should be replaced. Even though the battery indicator in the app shows full, the batteries probably supply a voltage too low for the electronics in the device, which is why the Wi-Fi stops working properly! The device may keep normal Bluetooth operation for several months, leaving the user wondering what is going on! In fact, problems with the Wi-Fi module are the first signal for a battery replacement.
Here are three screenshots of a device, which works perfectly fine without the Wi-Fi. It reports the temperature via Bluetooth and keeps track of it, but cannot connect to the Wi-Fi, even though it says the Wi-Fi settings are successfully saved.

SCREENSHOT 1) In the device settings the Wi-Fi is successfully set and the device immediately tries connecting.

[Screenshot: settings – Wi-Fi successfully set]

Keep on reading!

GPD Win Max2 (2023) with AMD 7840U – performance in Cinebench R23

GPD Win Max2 (2023) is a cool small laptop with joysticks for gaming. It has really cool specs, which many bigger laptops lack!

[Photo: original photo of the device]

The model tested here is equipped with:

  • 10.1-inch display – 2560×1600 with 400-nit brightness (supports 10-point touch and active pens with up to 4096 levels of pressure sensitivity)
  • AMD Ryzen 7 7840U with integrated GPU (RDNA 3) – AMD Radeon 780M Graphics (12CU, 768 shaders)
  • 64GB RAM (LPDDR5x-7500 MT/s)
  • 2TB NVMe (two slots available!)
  • Oculink (SFF-8612) Port
  • 2 USB4 ports (2 more USB 3.2 Gen1 ports and an SD card reader are available)
  • Wi-Fi 6
  • 67Wh battery
  • and many more features.

It's a little beast: with this CPU, the 64G RAM and the integrated RDNA 3 AMD GPU, this little machine offers gaming performance even with some of the best and most resource-demanding games at present. It can run top titles at 40-60 FPS – Cyberpunk 2077, Hades, Control, Rise of the Tomb Raider, Horizon Zero Dawn, The Witcher III, the latest Zelda series, Metroid Prime Remastered – in addition to many old games with emulation of PS1/PS2/PS3, Nintendo Switch/Wii and many more. There are many games tested on YouTube.

Here are Cinebench R23 tests for all predefined TDP watts with temperatures, fan and CPU speeds. The best TDP is probably 8 or 12 Watts max, where the CPU and the machine do not get hot, so there is no fan noise at all. The single-core performance is almost maxed out and the multi-core one is around half of the possible maximum performance of the device. The OS was not reinstalled and no drivers were updated – the tests were performed on the stock device with the original software and drivers. All tests were performed on battery at least half full. The ambient temperature is 26°C and the idle temperature of the CPU is between 34-36°C. Motion Assistant 1.1.6.2 is used to limit the TDP live.
In simple words, the TDP in the AMD world means the required cooling capacity, because the product (i.e. the CPU/GPU) can produce that amount of heat.
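
Motion Assistant is GPD's own tool; as a point of reference only, a similar live TDP limit can be scripted with the open-source RyzenAdj tool – a hedged sketch, assuming RyzenAdj supports the 7840U (the values are in milliwatts):

# limit the sustained (STAPM), fast and slow power limits to 12 W
ryzenadj --stapm-limit=12000 --fast-limit=12000 --slow-limit=12000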

Cinebench R23 multi-core tests

It always uses all the cores for the tests. The Cinebench R23 points are in yellow, the temperatures are in red and the TDP in Watts is in blue. Probably the best TDP relative to CPU performance is between 8 and 18 Watts. The minimal Cinebench R23 score, at 5 Watts TDP, is 1968 points and the maximum is around 14000 with 35 Watts TDP. In fact, the CPU offers great performance even with a 12 Watts TDP.

[Chart: multi-core – watts, temperatures, points]

Cinebench R23 single-core test

It is difficult to say how many cores are used and how many are parked, but between 6 and 12 are parked and only two are at most half used. The single-core performance is much easier to interpret, because it maxes out at the 8 Watts TDP and does not change when the TDP is higher. That's why even 8 Watts TDP is really good for productivity work – maximum single-core performance, 1/3 of the multi-core performance, and temperatures and watt usage tuned for maximum battery life.

[Chart: single core – watts, temperatures, points]

Keep on reading!

mdadm assembles AVAGO/LSI MegaRAID controller RAID 5 array

It is possible to read data with the Linux software RAID using the mdadm tool from a RAID 5 array created with the hardware RAID controller AVAGO MegaRAID 9361-4i (LSI SAS3108).

[Screenshot: mdadm -E sdb]

Here is how a RAID 5 array with 3 hard drives and 1 SSD (with CacheCade in write-through mode) is assembled by mdadm and the Linux software RAID:

livecd ~ # cat /proc/mdstat 
Personalities : [raid0] [raid6] [raid5] [raid4] 
md125 : active raid0 sda[0]
      937164800 blocks super external:/md127/1 1024k chunks
      
md126 : active raid5 sdb[2] sdc[1] sdd[0]
      23436722176 blocks super external:/md127/0 level 5, 1024k chunk, algorithm 2 [3/3] [UUU]
      [==============>......]  resync = 72.0% (8438937704/11718361088) finish=336.8min speed=162234K/sec
      
md127 : inactive sdb[3](S) sda[2](S) sdd[1](S) sdc[0](S)
      2100568 blocks super external:ddf
       
unused devices: <none>

Note, it is essential that the CacheCade device is in write-through mode, which means the cache device is used only for reading and the data on the RAID array is consistent and fully written to it. The RAID 5 array was created here – AVAGO MegaRAID SAS-9361-4i with CacheCade – create a new virtual drive RAID5 with SSD caching. It seems possible for the data to be consistent even if the CacheCade is in write-back mode, provided there were few small writes and an orderly shutdown prior to the removal of the AVAGO MegaRAID 9361-4i.
So, the above devices use the proprietary LSI format (note the “external:ddf” metadata above), but the Linux software RAID supports some of them:

  • md125 – the SSD device, which is a read cache only.
  • md126 – the 3 hard drives in the RAID 5 array.
  • md127 – the container device, which provides a transparent interface to the DDF metadata on the member disks.

The important device is md126 and it can be mounted under some live Linux CD/USB. Further, the md126 device has a GPT partition table with 5 partitions:

livecd ~ # parted /dev/md126 --script print
Model: Linux Software RAID Array (md)
Disk /dev/md126: 24.0TB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system     Name                  Flags
 1      1049kB  211MB   210MB   fat16           EFI System Partition  boot, esp
 2      211MB   1285MB  1074MB  ext4                                  msftdata
 3      1285MB  23.9TB  23.9TB  ext4                                  msftdata
 4      23.9TB  24.0TB  53.7GB  ext4                                  msftdata
 5      24.0TB  24.0TB  16.8GB  linux-swap(v1)                        swap
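
To actually reach the data under the live Linux CD/USB, something like the following should be enough – a minimal sketch, assuming the device names from the outputs above (the mount is read-only on purpose):

# assemble everything mdadm can find, including the external DDF container
mdadm --assemble --scan
# inspect the external (DDF) metadata on one of the member disks
mdadm -E /dev/sdb
# mount the big ext4 data partition (number 3 above) read-only
mount -o ro /dev/md126p3 /mnt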

Keep on reading!

Delete an Offline RAID6 virtual drive and create a new one with AVAGO storcli

An Offline virtual device means it cannot be used, because the missing, bad or failed disks are more than the fault tolerance it offers. In this case, there is an AVAGO MegaRAID 3108 controller with 2 x RAID 6 virtual drives with 6 disks each. One of the virtual drives is missing 3 of its 6 disks, so this virtual drive is in Offline state and cannot be repaired. Three new disks were installed to replace the failed ones. Here are the commands to issue with the AVAGO command-line utility storcli under CentOS 7 to delete and then create a healthy new RAID 6 virtual drive:

  1. Delete the Offline virtual drive.
  2. Create a new RAID 6 virtual drive with 6 disks.
  3. Initialize the newly created virtual drive to make it consistent.
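
A minimal sketch of the three steps with storcli (the enclosure:slot range 8:0-5 is an example – use the real values from the PD LIST output below):

# 1) delete the Offline virtual drive /c0/v0; force is needed because it still holds data
/opt/MegaRAID/storcli/storcli64 /c0/v0 del force
# 2) create a new RAID 6 virtual drive from 6 disks
/opt/MegaRAID/storcli/storcli64 /c0 add vd type=raid6 name=storage1 drives=8:0-5
# 3) start a full initialization and watch its progress
/opt/MegaRAID/storcli/storcli64 /c0/v0 start init full
/opt/MegaRAID/storcli/storcli64 /c0/v0 show init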

On each step, additional storcli show commands are included to better present what happens in reality and how the controller reflects the changes.
The initial state of the whole configuration is shown below:

[root@srv ~]# /opt/MegaRAID/storcli/storcli64 /c0 show
Generating detailed summary of the adapter, it may take a while to complete.

CLI Version = 007.0709.0000.0000 Aug 14, 2018
Operating system = Linux 3.10.0-957.1.3.el7.x86_64
Controller = 0
Status = Success
Description = None

Product Name = AVAGO 3108 MegaRAID
Serial Number = FW-AC5CMJEAARBWA
SAS Address =  500304802426b600
PCI Address = 00:01:00:00
System Time = 09/20/2022, 14:09:12
Mfg. Date = 00/00/00
Controller Time = 09/20/2022, 14:09:08
FW Package Build = 24.21.0-0028
BIOS Version = 6.36.00.2_4.19.08.00_0x06180202
FW Version = 4.680.00-8290
Driver Name = megaraid_sas
Driver Version = 07.705.02.00-rh1
Current Personality = RAID-Mode 
Vendor Id = 0x1000
Device Id = 0x5D
SubVendor Id = 0x15D9
SubDevice Id = 0x809
Host Interface = PCI-E
Device Interface = SAS-12G
Bus Number = 1
Device Number = 0
Function Number = 0
Drive Groups = 2

TOPOLOGY :
========

----------------------------------------------------------------------------
DG Arr Row EID:Slot DID Type  State BT      Size PDC  PI SED DS3  FSpace TR 
----------------------------------------------------------------------------
 0 -   -   -        -   RAID6 OfLn  N  43.654 TB dflt N  N   dflt N      N  
 0 0   -   -        -   RAID6 Dgrd  N  43.654 TB dflt N  N   dflt N      N  
 0 0   0   -        -   DRIVE Msng  -  10.913 TB -    -  -   -    -      N  
 0 0   1   8:1      13  DRIVE Onln  N  10.913 TB dflt N  N   dflt -      N  
 0 0   2   8:2      10  DRIVE Onln  N  10.913 TB dflt N  N   dflt -      N  
 0 0   3   -        -   DRIVE Msng  -  10.913 TB -    -  -   -    -      N  
 0 0   4   8:4      11  DRIVE Onln  N  10.913 TB dflt N  N   dflt -      N  
 0 0   5   -        -   DRIVE Msng  -  10.913 TB -    -  -   -    -      N  
 1 -   -   -        -   RAID6 Optl  N  43.654 TB dflt N  N   dflt N      N  
 1 0   -   -        -   RAID6 Optl  N  43.654 TB dflt N  N   dflt N      N  
 1 0   0   8:6      20  DRIVE Onln  N  10.913 TB dflt N  N   dflt -      N  
 1 0   1   8:7      19  DRIVE Onln  N  12.732 TB dflt N  N   dflt -      N  
 1 0   2   8:8      18  DRIVE Onln  N  10.913 TB dflt N  N   dflt -      N  
 1 0   3   8:9      15  DRIVE Onln  N  10.913 TB dflt N  N   dflt -      N  
 1 0   4   8:10     12  DRIVE Onln  N  10.913 TB dflt N  N   dflt -      N  
 1 0   5   8:11     14  DRIVE Onln  N  10.913 TB dflt N  N   dflt -      N  
----------------------------------------------------------------------------

DG=Disk Group Index|Arr=Array Index|Row=Row Index|EID=Enclosure Device ID
DID=Device ID|Type=Drive Type|Onln=Online|Rbld=Rebuild|Dgrd=Degraded
Pdgd=Partially degraded|Offln=Offline|BT=Background Task Active
PDC=PD Cache|PI=Protection Info|SED=Self Encrypting Drive|Frgn=Foreign
DS3=Dimmer Switch 3|dflt=Default|Msng=Missing|FSpace=Free Space Present
TR=Transport Ready

Virtual Drives = 2

VD LIST :
=======

------------------------------------------------------------------
DG/VD TYPE  State Access Consist Cache Cac sCC      Size Name     
------------------------------------------------------------------
0/0   RAID6 OfLn  RW     No      RAWBD -   ON  43.654 TB storage1 
1/1   RAID6 Optl  RW     Yes     RAWBD -   ON  43.654 TB storage2 
------------------------------------------------------------------

Cac=CacheCade|Rec=Recovery|OfLn=OffLine|Pdgd=Partially Degraded|Dgrd=Degraded
Optl=Optimal|RO=Read Only|RW=Read Write|HD=Hidden|TRANS=TransportReady|B=Blocked|
Consist=Consistent|R=Read Ahead Always|NR=No Read Ahead|WB=WriteBack|
AWB=Always WriteBack|WT=WriteThrough|C=Cached IO|D=Direct IO|sCC=Scheduled
Check Consistency

Physical Drives = 12

PD LIST :
=======

---------------------------------------------------------------------------------
EID:Slt DID State DG      Size Intf Med SED PI SeSz Model                Sp Type 
---------------------------------------------------------------------------------
8:0       9 UGood -  12.732 TB SATA HDD N   N  512B ST14000NM001G-2KJ103 D  -    
8:1      13 Onln  0  10.913 TB SATA HDD N   N  512B ST12000NM0007-2A1101 U  -    
8:2      10 Onln  0  10.913 TB SATA HDD N   N  512B ST12000NM0007-2A1101 U  -    
8:3      17 UGood -  12.732 TB SATA HDD N   N  512B ST14000NM001G-2KJ103 D  -    
8:4      11 Onln  0  10.913 TB SATA HDD N   N  512B ST12000NM001G-2MV103 U  -    
8:5      16 UGood -  12.732 TB SATA HDD N   N  512B ST14000NM001G-2KJ103 D  -    
8:6      20 Onln  1  10.913 TB SATA HDD N   N  512B ST12000NM0007-2A1101 U  -    
8:7      19 Onln  1  12.732 TB SATA HDD N   N  512B ST14000NM001G-2KJ103 U  -    
8:8      18 Onln  1  10.913 TB SATA HDD N   N  512B ST12000NM0007-2A1101 U  -    
8:9      15 Onln  1  10.913 TB SATA HDD N   N  512B ST12000NM0007-2A1101 U  -    
8:10     12 Onln  1  10.913 TB SATA HDD N   N  512B ST12000NM0007-2A1101 U  -    
8:11     14 Onln  1  10.913 TB SATA HDD N   N  512B ST12000NM0007-2A1101 U  -    
---------------------------------------------------------------------------------

EID-Enclosure Device ID|Slt-Slot No.|DID-Device ID|DG-DriveGroup
DHS-Dedicated Hot Spare|UGood-Unconfigured Good|GHS-Global Hotspare
UBad-Unconfigured Bad|Onln-Online|Offln-Offline|Intf-Interface
Med-Media Type|SED-Self Encryptive Drive|PI-Protection Info
SeSz-Sector Size|Sp-Spun|U-Up|D-Down/PowerSave|T-Transition|F-Foreign
UGUnsp-Unsupported|UGShld-UnConfigured shielded|HSPShld-Hotspare shielded
CFShld-Configured shielded|Cpybck-CopyBack|CBShld-Copyback Shielded


Cachevault_Info :
===============

------------------------------------
Model  State   Temp Mode MfgDate    
------------------------------------
CVPM02 Optimal 28C  -    2018/01/11 
------------------------------------

The storcli show command for only the first virtual drive “/c0/v0” is also possible:

[root@srv ~]# /opt/MegaRAID/storcli/storcli64 /c0/v0 show all
CLI Version = 007.0709.0000.0000 Aug 14, 2018
Operating system = Linux 3.10.0-957.1.3.el7.x86_64
Controller = 0
Status = Success
Description = None


/c0/v0 :
======

------------------------------------------------------------------
DG/VD TYPE  State Access Consist Cache Cac sCC      Size Name     
------------------------------------------------------------------
0/0   RAID6 OfLn  RW     No      RAWBD -   ON  43.654 TB storage1 
------------------------------------------------------------------

Cac=CacheCade|Rec=Recovery|OfLn=OffLine|Pdgd=Partially Degraded|Dgrd=Degraded
Optl=Optimal|RO=Read Only|RW=Read Write|HD=Hidden|TRANS=TransportReady|B=Blocked|
Consist=Consistent|R=Read Ahead Always|NR=No Read Ahead|WB=WriteBack|
AWB=Always WriteBack|WT=WriteThrough|C=Cached IO|D=Direct IO|sCC=Scheduled
Check Consistency


PDs for VD 0 :
============

---------------------------------------------------------------------------------
EID:Slt DID State DG      Size Intf Med SED PI SeSz Model                Sp Type 
---------------------------------------------------------------------------------
8:1      13 Onln   0 10.913 TB SATA HDD N   N  512B ST12000NM0007-2A1101 U  -    
8:2      10 Onln   0 10.913 TB SATA HDD N   N  512B ST12000NM0007-2A1101 U  -    
8:4      11 Onln   0 10.913 TB SATA HDD N   N  512B ST12000NM001G-2MV103 U  -    
---------------------------------------------------------------------------------

EID-Enclosure Device ID|Slt-Slot No.|DID-Device ID|DG-DriveGroup
DHS-Dedicated Hot Spare|UGood-Unconfigured Good|GHS-Global Hotspare
UBad-Unconfigured Bad|Onln-Online|Offln-Offline|Intf-Interface
Med-Media Type|SED-Self Encryptive Drive|PI-Protection Info
SeSz-Sector Size|Sp-Spun|U-Up|D-Down/PowerSave|T-Transition|F-Foreign
UGUnsp-Unsupported|UGShld-UnConfigured shielded|HSPShld-Hotspare shielded
CFShld-Configured shielded|Cpybck-CopyBack|CBShld-Copyback Shielded


VD0 Properties :
==============
Strip Size = 1.0 MB
Number of Blocks = 93746888704
VD has Emulated PD = Yes
Span Depth = 1
Number of Drives Per Span = 6
Write Cache(initial setting) = WriteBack
Disk Cache Policy = Disk's Default
Encryption = None
Data Protection = Disabled
Active Operations = None
Exposed to OS = Yes
OS Drive Name = N/A
Creation Date = 19-12-2018
Creation Time = 06:11:08 AM
Emulation type = default
Cachebypass size = Cachebypass-64k
Cachebypass Mode = Cachebypass Intelligent
Is LD Ready for OS Requests = Yes
SCSI NAA Id = 600304802426b60023ac9d7c0a7a305b
SCSI Unmap = No

Keep on reading!

Surviving a 3-disk failure of RAID 6 with AVAGO 3108 MegaRAID and foreign config

Whatever the reason for ending up with 3 broken hard disks in a RAID 6 setup, it does not matter! What matters is to recover the data if possible, and the most important thing in this situation is to find the LAST hard disk that was marked as failed and removed from the array, because at that point the array goes into offline state immediately! So if the last broken hard disk still has a little life left, it is probably easy to recover the data. The hardware controller is an additional Supermicro board – AOC-S3108L-H8iR.
What happened – a third disk got the failed status and the virtual device using the RAID 6 setup went into offline state. In offline state the virtual device will not execute any READ or WRITE operations, because part of the data is missing and the virtual drive has no meaningful user data.
To survive and back up the data:

  1. Power off the server. It is better to remove the power cord afterwards and wait at least a minute before plugging it back in.
  2. Power on the server.
  3. When prompted for actions during the AVAGO 3108 MegaRAID initialization, just continue the server boot without accepting any changes.
  4. Boot a recovery disk and, using the AVAGO command-line (cli) tool, dump the “events” to a file. A sample command might be:
    /opt/MegaRAID/storcli/storcli64 /c0 show events >show.event.log
    

    Assuming the controller with the Offline RAID 6 virtual drive is “/c0”. Other possible options are “/c1”, “/c2” and so on.

  5. Read the AVAGO 3108 MegaRAID events dump from the end towards the start and find which hard drive was marked as failed LAST, i.e. with the latest date and time. Right after it there are events marking the virtual device as Offline.
    seqNum: 0x00009a46
    Time: Mon Jun 27 01:49:54 2022
    
    Code: 0x00000072
    Class: 0
    Locale: 0x02
    Event Description: State change on PD 10(e0x08/s5) from ONLINE(18) to FAILED(11)
    Event Data:
    ===========
    Device ID: 16
    Enclosure Index: 8
    Slot Number: 5
    Previous state: 24
    New state: 17
    
    
    seqNum: 0x00009a47
    Time: Mon Jun 27 01:49:54 2022
    
    Code: 0x00000051
    Class: 0
    Locale: 0x01
    Event Description: State change on VD 00/0 from DEGRADED(2) to OFFLINE(0)
    Event Data:
    ===========
    Target Id: 0
    Previous state: 2
    New state: 0
    

    The first event of the list above logs that the hard drive PD 10(e0x08/s5) gets the FAILED status. And immediately after that the virtual drive VD 00/0 goes Offline, which means the last disk before the RAID 6 virtual drive stopped working is PD 10(e0x08/s5). The “/s5” in PD 10(e0x08/s5) points to the “Slot 5” hard drive.

  6. Reboot the server and when prompted by the AVAGO 3108 MegaRAID BIOS Configuration Utility, this time enter the utility.
  7. Bring the hard drive found in the previous steps to the ONLINE state. The hard drive might be in a foreign configuration or just in a bad state, so import the foreign configuration and set the drive to the GOOD state; its state will immediately become ONLINE, which means it is part of an existing virtual drive. The virtual drive state will immediately change to DEGRADED (two broken disks are still out of the virtual drive). Follow the screenshots below to get the last broken disk back ONLINE and the virtual drive into an operable state – DEGRADED. If the drive is only in a BAD/FAILED state, just skip the foreign part and make the disk ONLINE (it may require making the disk “unconfigured good” first).
  8. Recover the data by simply copying it to another server or a healthy virtual drive, as sketched below. DO NOT TRY TO REMOVE data, i.e. do not use “rm” – the real state of this third broken disk is unknown and writing would probably kill it off. A good idea is to mount the filesystems on this virtual drive read-only and just rsync the data to a backup.
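
For reference, the foreign import can also be done from the OS with storcli instead of the BIOS utility, and the backup itself is a read-only mount plus rsync – a hedged sketch, where the device name /dev/sda3 and the backup destination are assumptions:

# list and then import the foreign configuration, so the slot 5 drive rejoins the array
/opt/MegaRAID/storcli/storcli64 /c0/fall show
/opt/MegaRAID/storcli/storcli64 /c0/fall import
# mount the recovered filesystem read-only and copy everything off
mount -o ro /dev/sda3 /mnt/recover
rsync -a --progress /mnt/recover/ backup-server:/backup/srv/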

Here is the process of getting the third disk on “Slot 5” from “Missing” and the “Virtual Drive 0” from Offline to an ONLINE state of the hard drive and a DEGRADED, i.e. operating, state of the “Virtual Drive 0”.

SCREENSHOT 1) The Drive in Slot 5 is missing and the Virtual Drive 0 is in OFFLINE state

Slot 5 is the hard drive we need to recover, but the controller reports the hard drive as missing. Missing points out there is another (foreign) configuration, so press “Ctrl+N” to change to the next page (i.e. menu), which is “PD Mgmt” – physical disk management.

[Screenshot: slot 5 disk missing and VD offline]

Keep on reading!

Really bad performance when going from Write-Back to Write-Through in an LSI controller

Ever wondered what the impact of write-through is on an LSI controller in a real-world streaming server? Wonder no more!

you can get several times slower writes with the write-through mode than if your controller were using the write-back mode of the cache

And it could happen at any moment, because when the battery of the LSI controller is charging and you have set “No Write Cache if Bad BBU”, the write-through mode kicks in. Of course, you can make a schedule for the battery charging/discharging process, but in general, it will happen and it will hurt your IO performance a lot!

In simple words, in write-through mode a write operation is successful only when the controller confirms the write operation on all disks, no matter that the data is already in the cache.

This mode puts pressure on the disks and write-through is a known destroyer of hard disks! You can read a lot of administrators' feedback on the Internet about disks crashing in write-through mode (and sometimes several simultaneously on one machine, losing all your data even though there is redundancy with some of the RAID setups like RAID1, RAID5, RAID6, RAID10 and so on).

srv ~ # sudo megacli -ldinfo -lall -aall
                                     
Adapter 0 -- Virtual Drive Information:
Virtual Drive: 0 (Target Id: 0)
Name                :system
RAID Level          : Primary-1, Secondary-0, RAID Level Qualifier-0
Size                : 13.781 TB
Sector Size         : 512
Mirror Data         : 13.781 TB
State               : Optimal
Strip Size          : 128 KB
Number Of Drives per span:2
Span Depth          : 6
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
Current Cache Policy: WriteThrough, ReadAdaptive, Direct, No Write Cache if Bad BBU
Default Access Policy: Read/Write
Current Access Policy: Read/Write
Disk Cache Policy   : Disk's Default
Encryption Type     : None
Bad Blocks Exist: No
Is VD Cached: Yes
Cache Cade Type : Read Only

Exit Code: 0x00

As you can see, our default cache policy is WriteBack with “No Write Cache if Bad BBU” – and the BBU is not bad, just charging!
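
A hedged sketch for handling it: check the BBU state and, optionally, force the write cache to stay on even with a bad/charging BBU – a standard MegaCli property, but risky without a UPS:

# show the BBU state - look for the charging/relearn status
sudo megacli -AdpBbuCmd -GetBbuStatus -aALL
# keep WriteBack even when the BBU is bad or charging (use only with a reliable power supply!)
sudo megacli -LDSetProp CachedBadBBU -LAll -aALL
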
Keep on reading!

Dell Inspiron 7352 with 16G DDR3L single SODIMM RAM – it works!

What a great piece of hardware the Dell Inspiron 7352 is! In the year 2015, this laptop was on the market with an 8G DDR3L SODIMM and this was the maximum single SODIMM RAM available at the moment! Still, DELL had decided to make this laptop with a memory slot, not RAM soldered on the motherboard! After around two years Crucial made available a single 16G DDR3L SODIMM – Crucial CT25664BF160B DDR3L 1600 MT/s PC3L-12800 SODIMM 204-pin memory 16GB – and DELL made a BIOS update, which made the laptop support 16G RAM!
This article is to confirm that

DELL Inspiron 13 7352 laptop (i7-5500U 2.40GHz) works perfectly with Crucial 16G DDR3L!

The BIOS is A08 (11/13/2015); probably it would work with all later versions after A08, too! At present, there are newer BIOS versions, but we haven't flashed our BIOS yet!
Keep on reading!

Update supermicro X10SLH-F firmware BIOS under Linux with the SUM cli

As you can see, our product is:

product: X10SLH-F/X10SLM+-F

The same string is in our KVM IPMI: “Product Name: X10SLH-F/X10SLM+-F” and in the BIOS, but if you go to the Supermicro site you will find that

  • X10SLH-F has C226 chipset (supports video in the CPU)
  • X10SLM+-F has C224 chipset

and because we use the video in the CPU, we know our motherboard is the X10SLH-F and we downloaded the BIOS firmware for it. You could also check your chipset with the lshw command.

STEP 1) Download and unpack the SUM (Supermicro Update Manager) and the BIOS zip file

Unpack the SUM (Supermicro Update Manager); here you can find detailed information about SUM – Update supermicro server’s firmware BIOS under linux with the SUM cli

[root@srv1 ~]# tar xzvf sum_2.0.0_Linux_x86_64_20171108.tar.gz 
sum_2.0.0_Linux_x86_64/
sum_2.0.0_Linux_x86_64/ReleaseNote.txt
sum_2.0.0_Linux_x86_64/sum
sum_2.0.0_Linux_x86_64/ExternalData/
sum_2.0.0_Linux_x86_64/ExternalData/VENID.txt
sum_2.0.0_Linux_x86_64/ExternalData/SMCIPID.txt
sum_2.0.0_Linux_x86_64/driver/
sum_2.0.0_Linux_x86_64/driver/RHL4_x86_64/
sum_2.0.0_Linux_x86_64/driver/RHL4_x86_64/sum_bios.ko
sum_2.0.0_Linux_x86_64/driver/RHL6_x86_64/
sum_2.0.0_Linux_x86_64/driver/RHL6_x86_64/sum_bios.ko
sum_2.0.0_Linux_x86_64/driver/RHL5_x86_64/
sum_2.0.0_Linux_x86_64/driver/RHL5_x86_64/sum_bios.ko
sum_2.0.0_Linux_x86_64/driver/RHL7_x86_64/
sum_2.0.0_Linux_x86_64/driver/RHL7_x86_64/sum_bios.ko
sum_2.0.0_Linux_x86_64/SUM_UserGuide.pdf
[root@srv1 ~]# unzip x10slh8_510.zip
Archive:  x10slh8_510.zip
   creating: x10slh8.510/
  inflating: x10slh8.510/AFUDOSU.SMC  
  inflating: x10slh8.510/ami.bat     
  inflating: x10slh8.510/Readme for AMI BIOS.txt  
  inflating: x10slh8.510/x10slh8.510  
[root@srv1 ~]# cd sum_2.0.0_Linux_x86_64

STEP 2) Flash the BIOS file with sum cli.

Here you can see what to expect when flashing the BIOS firmware.

[root@srv1 sum_2.0.0_Linux_x86_64]# ./sum -c UpdateBios --file ../x10slh8.510/x10slh8.510 
Supermicro Update Manager (for UEFI BIOS) 2.0.0 (2017/11/08) (x86_64)
Copyright©2017 Super Micro Computer, Inc. All rights reserved
Reading BIOS flash ..................... (100%)
Checking BIOS ID ...
Writing BIOS flash ..................... (100%)
Verifying BIOS flash ................... (100%)
Checking ME Firmware ...
Putting ME data to BIOS ................ (100%)
Writing ME region in BIOS flash ...
 - Update success for /FDT!!
 - Updated Recovery Loader to OPRx
 - Updated FPT, MFSB, FTPR and MFS
 - ME Entire Image done
WARNING:Must power cycle or restart the system for the changes to take effect!
[root@srv1 sum_2.0.0_Linux_x86_64]# reboot

During the BIOS flashing your console may seem unresponsive for several minutes, but that is OK; the flash process takes about 10 minutes. Then reboot and wait for several automatic resets of the system, and when the system reaches the OS boot, reboot again, reset your BIOS to the optimized defaults and then tune it as it was before.

In some rare cases you could receive “Critical Error” – “FDT is different.” In that case, reboot and repeat the procedure; more information here – Update supermicro server’s firmware BIOS under linux with the SUM cli

Bonus

Some commands to find the exact information for the server motherboard.

[root@srv1 ~]# lshw|grep -A 14 "core$"
  *-core
       description: Motherboard
       product: X10SLH-F/X10SLM+-F
       vendor: Supermicro
       physical id: 0
       version: 1.01
       serial: ZM1111111111
       slot: To be filled by O.E.M.
     *-firmware
          description: BIOS
          vendor: American Megatrends Inc.
          physical id: 0
          version: 3.0a
          date: 12/17/2015
          size: 64KiB
[root@srv1 ~]# lspci |grep -i c226
00:1f.0 ISA bridge: Intel Corporation C226 Series Chipset Family Server Advanced SKU LPC Controller (rev 05)
conv2 ~ # lspci -vvv|grep -i c226
00:1f.0 ISA bridge: Intel Corporation C226 Series Chipset Family Server Advanced SKU LPC Controller (rev 05)
        Subsystem: Super Micro Computer Inc C226 Series Chipset Family Server Advanced SKU LPC Controller
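
dmidecode can print the same strings directly and is usually installed by default (the values match the lshw output above):

[root@srv1 ~]# dmidecode -s baseboard-product-name
X10SLH-F/X10SLM+-F
[root@srv1 ~]# dmidecode -s bios-version
3.0a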

Supermicro server cannot enter the BIOS with F2, DEL or other keys when a UEFI-mode OS is installed

If you happen to have a Supermicro server (X10SLH-F) with Linux installed in UEFI mode (in our case CentOS 7) and you want to enter the BIOS, you'll be surprised that you cannot with the keys provided on the very same BIOS boot screen – F2, DEL. The F11 and F12 keys also do not work for menu selection and network boot!

Even if you manage to press the DEL key and you see “Entering BIOS setup…” on the screen – the server WON'T enter the BIOS, but will continue with the UEFI BIOS boot drive!

So what to do? Temporarily break your system by removing (renaming or moving) the EFI directory in your EFI boot partition, resetting your server and holding the DEL key pressed (again) on all startup screens of the server. When the UEFI BIOS boot entry is not valid any more and there are no other boot devices (and probably because we pressed the DEL key), we were able to enter the BIOS without remote hands at the colocation side or any other intervention on the server.

[root@srv ~]# mv /boot/efi/EFI/ /boot/efi/EFI_org
[root@srv ~]# reboot

This is the path in CentOS 7 and our standard partition layout:

[root@srv ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3         26G  4.5G    20G  19% /
devtmpfs         7.8G     0   7.8G   0% /dev
tmpfs            7.8G     0   7.8G   0% /dev/shm
tmpfs            7.8G  8.5M   7.8G   1% /run
tmpfs            7.8G     0   7.8G   0% /sys/fs/cgroup
/dev/sda2        976M   98M   812M  11% /boot
/dev/sda1        200M  9.8M   191M   5% /boot/efi
tmpfs            1.6G     0   1.6G   0% /run/user/0
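
Of course, once inside the BIOS and done with the changes, the OS will no longer boot on its own, so restore the directory from a rescue/live system – a sketch, assuming the partition layout above:

# from the rescue system: mount the EFI system partition and put the EFI directory back
mount /dev/sda1 /mnt
mv /mnt/EFI_org /mnt/EFI
umount /mnt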

DO NOT forget to remove all other (virtual) CD/DVD-ROM devices and temporarily disable your network PXE server (if you have any in the network)!

Because when the UEFI BIOS cannot find the EFI file saved in the UEFI BIOS boot entry, it might follow the boot order before entering the BIOS!

Enter the BIOS by remote console on X9 boards with UEFI BIOS

Apparently there is an issue with X8 and X9 Supermicro boards in UEFI BIOS mode: https://www.supermicro.com/support/faqs/faq.cfm?faq=14029
So for some, pressing and holding “ESC” + “-” or F4 to enter the UEFI BIOS could be useful, but we could not make it work because of the IPMI KVM we used to manage the server.

Install the new storcli to manage (LSI/AVAGO/Broadcom) MegaRAID controller under CentOS 7

After the acquisition of LSI there was a major change in the management console utility for the MegaRAID controllers. The utility was renamed from MegaCli (MegaCli64, megacli) to

storcli (storcli64)

We have new controllers like the AVAGO MegaRAID SAS-9361-4i and really old ones like the LSI 2108 MegaRAID (in fact a Supermicro AOC-USAS2LP-H8iR), and both controllers can be managed with the new cli – even the old controller, which is more than 8 years old.
An interesting fact is that the storcli output and argument syntax are almost identical to those of a really old cli – tw_cli – the 3ware management utility. As you know, LSI bought the 3ware RAID adapter business in 2009.
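
A minimal sketch of the installation on CentOS 7 – the archive and rpm file names are assumptions, the real ones come with the Unified StorCLI package from the Broadcom support site:

# unpack the downloaded package and install the noarch rpm
unzip Unified_storcli_all_os.zip
cd Unified_storcli_all_os/Linux
rpm -ivh storcli-*.noarch.rpm
# the binary lands under /opt/MegaRAID/storcli/ - list all detected controllers
/opt/MegaRAID/storcli/storcli64 show
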
Keep on reading!