Enterprise SSD comparison

Intel SSD

S4500 (read-intensive, 1 DWPD): 240 GB, 480 GB, 960 GB, 1.92 TB (2 TB class), 3.84 TB (4 TB class)

S4600 (mixed use, 3 DWPD): 240 GB, 480 GB, 960 GB, 1.92 TB (2 TB class)

Max. 128 KB sequential read/write – up to 500/490 MB/s

S4500: Max. 4k random read/write – up to 72k/33k IOPS

S4600: Max. 4k random read/write – up to 72k/65k IOPS

Samsung PM963

1.3 DWPD for 3 years
sequential read/write – up to 2,000/1,200 MB/s
4k random read/write – up to 430K / 40K IOPS
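The DWPD ratings above can be turned into total terabytes written (TBW) with TBW = DWPD × capacity (TB) × 365 × warranty years. A quick sketch for the 1.92 TB models, assuming the S4500's usual 5-year warranty (the warranty length is an assumption, not stated above):

```shell
# TBW = DWPD x capacity (TB) x 365 x warranty years
# S4500: 1 DWPD, 1.92 TB, assuming a 5-year warranty
awk 'BEGIN { printf "S4500 1.92TB: %.0f TBW\n", 1 * 1.92 * 365 * 5 }'
# PM963: 1.3 DWPD for 3 years (as stated above), 1.92 TB model
awk 'BEGIN { printf "PM963 1.92TB: %.0f TBW\n", 1.3 * 1.92 * 365 * 3 }'
```

So despite the lower DWPD figure, the S4500's longer warranty gives it a higher lifetime TBW than the PM963 at the same capacity.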

Data center route comparison | NWT (HKBN) | IXTech

The http://ping.pe/ site was used to test ping times and routing from different countries and regions to the two data centers.
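The same kind of check can also be scripted locally. A small sketch that extracts the average RTT from GNU/Linux `ping` output (the hostnames in the usage note are placeholders, not real endpoints):

```shell
# Print the average round-trip time (ms) for a host.
# GNU/Linux ping's summary line ends with "min/avg/max/mdev = a/b/c/d ms";
# splitting on '/' puts the average in field 5.
avg_rtt() {
    ping -c 5 "$1" | tail -1 | awk -F'/' '{print $5}'
}

# Usage (placeholder hostnames):
# avg_rtt sg-probe.example.com
# avg_rtt hk-dc.example.com
```

`traceroute` (or `mtr --report`) against the same hosts shows the carrier hops that explain the differences below.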

Testing shows that, for destinations outside Hong Kong and mainland China, NWT (HKBN) mainly uses telia.net and gtt.net as its overseas backbone carriers,
while IXTech mainly uses Telstra Global and iptp.net for overseas routes.

After testing, the comparison yields the following conclusions:

In the US, IXTech beats NWT-telia.

In Italy, NWT-telia beats IXTech-iptp (~170 ms vs ~250 ms).

In Singapore, IXTech-telstra beats NWT-telia by a very wide margin (~34 ms vs ~230 ms),
mainly because the NWT-telia route goes through the US and then Europe before finally reaching HK,
whereas IXTech-telstra connects directly from Singapore to HK.

In Japan and Australia, IXTech-telstra again beats NWT-telia, by roughly a factor of two (~160 ms vs ~290 ms);
as in the Singapore case, the NWT-telia route goes through the US and then Europe before finally reaching HK.

Mainland China routes are largely the same for both providers, though NWT (HKBN) wins in a few regions and with a few carriers.






WHMCS Chinese invoice support

Methods for this can be found online, but they are out of date; it is better to follow the official WHMCS approach.

Download the droidsansfallback font, extract it, and upload it to the /vendor/tecnickcom/tcpdf/fonts directory of your WHMCS installation.

Log in to the WHMCS admin area and go to Setup -> General Settings -> Invoices -> PDF Font Family -> Custom.

Enter droidsansfallback and click Save at the bottom of the page.
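If you have SSH access, the extract-and-upload step can be done on the server directly. A sketch, assuming the font archive has already been downloaded as droidsansfallback.zip and that WHMCS lives under /var/www/whmcs (both the archive name and the path are assumptions — adjust to your installation):

```shell
# Assumptions: droidsansfallback.zip is in the current directory,
# and WHMCS_ROOT points at your WHMCS document root.
WHMCS_ROOT=/var/www/whmcs

unzip droidsansfallback.zip -d droidsansfallback
cp droidsansfallback/* "$WHMCS_ROOT/vendor/tecnickcom/tcpdf/fonts/"
```

After copying, the Custom font name entered in the admin area must match the font file names (droidsansfallback).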


Exim mail queue commands

Helpful Exim Commands:

/usr/sbin/exim -M  message-id      => Force delivery of one message
/usr/sbin/exim -qf                 => Force another queue run
/usr/sbin/exim -qff                => Force another queue run and attempt to flush frozen messages
/usr/sbin/exim -Mvl message-id     => View the log for the message
/usr/sbin/exim -Mvb message-id     => View the body of the message
/usr/sbin/exim -Mvh message-id     => View the header of the message
/usr/sbin/exim -Mrm message-id     => Remove message without sending any error message
/usr/sbin/exim -Mg  message-id     => Give up and fail the message, bouncing it to the sender

/usr/sbin/exim -bpr | grep '<' | wc -l                                  => Number of emails in the queue
/usr/sbin/exim -bpr | grep frozen | wc -l                               => Number of frozen mails in the queue
/usr/sbin/exim -bpr | grep frozen | awk '{print $3}' | xargs exim -Mrm  => Delete frozen messages

To flush the exim queue:

1. Log in to your server via SSH as root.

2. Run: exim -qff

Reference from: serversitters

[Storage Space] Removal of physical disks

Removing a Physical Disk from an existing pool (set usage, repair, remove) consists of the following workflow:

  • Setting the Usage property on the Physical Disk to 'Retired' to prevent additional data from being placed on it.
  • Executing the Repair-VirtualDisk command on all Storage Spaces associated with the Physical Disk being removed from the storage pool.
  • Executing the Remove-PhysicalDisk command on the physical disk to remove it from the pool.

Warning: Storage Spaces configured with the "Simple" resiliency setting will be lost if any associated Physical Disk is removed.

Example of removing a PhysicalDisk

Set-PhysicalDisk -FriendlyName PhysicalDisk8 -Usage Retired

Get-PhysicalDisk -FriendlyName PhysicalDisk8 | Get-VirtualDisk | Repair-VirtualDisk

Then wait for the Storage Space's HealthStatus to change from InService to Healthy (indicating that repairs are complete):

Remove-PhysicalDisk -FriendlyName PhysicalDisk8

Note: When executing the Repair-VirtualDisk command, if the HealthStatus of the VirtualDisk does not change to InService, check that adequate pool space exists for removal of the PhysicalDisk. If it does not, it may be necessary to add a new PhysicalDisk before removing the old one.

You can use the Get-StorageJob cmdlet to view the progress of running repair operations. After all repair operations have completed successfully, the physical disk whose usage was set to Retired can be safely removed.

Linux Integration Services Version 3.4 for Hyper-V

Linux Integration Services for Hyper-V provides the following functionality:

(Source: Linux Integration Services v3.4 Read Me)

  • Driver support: Linux Integration Services supports the network controller, and the IDE and SCSI storage controllers that were developed specifically for Hyper-V.
  • Fastpath boot support for Hyper-V: Boot devices take advantage of the block Virtualization Service Client (VSC) to provide enhanced performance.
  • Time Keeping: The clock inside the virtual machine will remain accurate by synchronizing to the clock on the virtualization server via Timesync service, and with the help of the pluggable time source device.
  • Integrated shutdown: Virtual machines running Linux can be shut down from either Hyper-V Manager or System Center Virtual Machine Manager by using the “Shut down” command.
  • Symmetric multiprocessing (SMP) support: Supported Linux distributions can use multiple virtual processors per virtual machine. The actual number of virtual processors that can be allocated to a virtual machine is only limited by the underlying hypervisor.
    • SMP support is not available for 32-bit Linux guest operating systems running on Windows Server 2008 Hyper-V or Microsoft Hyper-V Server 2008.
  • Heartbeat: This feature allows the virtualization server to detect whether the virtual machine is running and responsive.
  • KVP (Key-Value Pair) Exchange: Information about the running Linux virtual machine can be obtained by using the Key-Value Pair Exchange functionality on the Windows Server 2008 virtualization server.
  • Integrated mouse support: Linux Integration Services provides full mouse support for Linux guest virtual machines.
  • Live Migration: Linux virtual machines can undergo live migration for load balancing purposes.
  • Jumbo Frames: Linux virtual machines can be configured to use Ethernet frames with more than 1500 bytes of payload.
  • VLAN tagging and trunking: Administrators can attach single or multiple VLAN IDs to synthetic network adapters.
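Of the features above, jumbo frames is the one that also needs in-guest configuration. A sketch for a Linux guest, assuming the synthetic adapter appears as eth0 and that the Hyper-V virtual switch and physical network also permit a 9000-byte MTU (both assumptions — verify your own setup):

```shell
# Assumes the Hyper-V synthetic adapter is eth0; check with 'ip link show'.
# Requires root, and the whole path (vSwitch, physical NICs) must allow MTU 9000.
ip link set dev eth0 mtu 9000

# Confirm the change took effect:
ip link show eth0 | grep -o 'mtu [0-9]*'
```

The setting does not persist across reboots; add it to the distribution's network configuration (e.g. the interface config file) to make it permanent.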