Disk subsystems: HDD, SSD and NVMe. Hybrid SSHD drives

  • Server optimization
  • System administration
  • Data storage
  • Data warehouses

    Previous articles on storage systems in the “administrator’s notes” series barely touched on software-based disk arrays, and they also left aside an entire class of relatively cheap storage acceleration scenarios built on solid-state drives.


    Therefore, in this article I will look at three good options for using SSD drives to speed up the storage subsystem.

    Why not just build an all-SSD array: a little theory and reasoning on the topic

    Most often, SSDs are considered simply as an alternative to HDDs with higher bandwidth and IOPS. However, such a direct replacement is often too expensive (branded HP drives, for example, cost from $2,000), and ordinary SAS drives end up back in the project. As a compromise, fast disks are used only in selected places.


    In particular, it is convenient to use an SSD for the system partition or for a database partition, where a tangible performance gain can be measured. The same comparisons show that with a regular HDD the drive itself is the bottleneck, while with an SSD the interface becomes the limiting factor. Replacing just one disk will therefore not always give the same return as a comprehensive upgrade.


    Servers use SSDs with a SATA interface, or the faster SAS and PCI-E. Most SAS server SSDs on the market are sold under the HP, Dell and IBM brands. Incidentally, even in branded servers you can use drives from OEM manufacturers such as Toshiba, HGST (Hitachi) and others, which offer similar characteristics and make the upgrade considerably cheaper.


    With the widespread adoption of SSDs, a separate access protocol was developed for drives connected to the PCI-E bus: NVM Express (NVMe). The protocol was designed from scratch and significantly exceeds the familiar SCSI and AHCI in its capabilities. NVMe is used with SSDs that have PCI-E, U.2 (SFF-8639) and some M.2 interfaces, which are more than twice as fast as regular SSDs. The technology is relatively new, but over time it will certainly take its place in the fastest disk systems.


    A little about DWPD and how this characteristic affects the choice of a specific model.

    When choosing solid-state drives with a SATA interface, pay attention to the DWPD parameter, which indicates the endurance of the drive. DWPD (Drive Writes Per Day) is the permissible number of full-drive rewrites per day over the warranty period. Sometimes an alternative characteristic is given instead: TBW/PBW (TeraBytes Written, PetaBytes Written), the total volume of data that can be written to the disk over the warranty period. In consumer SSDs the DWPD may be less than one; in so-called “server” SSDs it can be 10 or more.
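    The DWPD and TBW ratings describe the same endurance budget and can be converted into each other. A minimal sketch, assuming the common convention TBW = DWPD × capacity × 365 × warranty years (the drive figures below are made up for illustration):

```python
# Converting between the two endurance ratings discussed above.
# Assumes the usual convention: TBW = DWPD * capacity_tb * 365 * warranty_years.

def dwpd_to_tbw(dwpd: float, capacity_tb: float, warranty_years: float = 5) -> float:
    """Total terabytes that may be written over the warranty period."""
    return dwpd * capacity_tb * 365 * warranty_years

def tbw_to_dwpd(tbw: float, capacity_tb: float, warranty_years: float = 5) -> float:
    """Full-drive writes per day implied by a TBW rating."""
    return tbw / (capacity_tb * 365 * warranty_years)

# A hypothetical consumer 1 TB drive rated 0.3 DWPD over 5 years:
print(dwpd_to_tbw(0.3, 1.0))    # 547.5 TBW
# A hypothetical "server" 0.4 TB drive rated for 7300 TBW over 5 years:
print(tbw_to_dwpd(7300, 0.4))   # 10.0 DWPD
```

    Either form lets you compare drives directly: a low-TBW, high-capacity disk may actually have a worse DWPD than a smaller "server" model.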


    This difference comes from the different types of NAND memory used:

    • SLC NAND. The simplest type: each memory cell stores one bit of information. Such drives are reliable and perform well, but they need more memory cells, which drives up the cost;

    • MLC NAND. Each cell stores two bits of information; this is the most popular type of memory;

    • eMLC NAND. The same as MLC, but with increased write endurance thanks to more expensive, higher-quality chips;

    • TLC NAND. Each cell stores three bits of information. The disk is as cheap as possible to produce, but has the lowest performance and endurance. To compensate for the loss of speed, SLC memory is often used as an internal cache.

    Thus, when replacing conventional disks with solid-state ones, it is logical to use MLC models in RAID 1, which will give excellent speed with the same level of reliability.


    It is sometimes argued that using RAID together with SSDs is not the best idea. The reasoning goes that SSDs in a RAID wear out synchronously, so at some point all the disks may fail at once, especially while rebuilding the array. However, the situation with HDDs is exactly the same, except that damaged blocks on a magnetic surface will not even let you read the information, unlike an SSD.

    Still, the high price of solid-state drives makes it worth considering alternatives to simple spot replacement or building storage systems entirely from SSDs.

    Expanding the RAID controller cache

    The speed of the array as a whole depends on the size and speed of the RAID controller's cache. You can expand this cache using SSDs; the approach resembles Intel's Smart Response solution.


    With such a cache, frequently used data is kept on the caching SSDs, from which it is read and then written through to the regular HDDs. There are usually two modes of operation, similar to a conventional RAID cache: write-back and write-through.


    With write-through, only reads are accelerated; with write-back, both reads and writes are accelerated.


    You can read more about these parameters under the spoiler.

    • With a write-through cache, data is written both to the cache and to the main array at the same time. This does not speed up writes, but it accelerates reads, and a power failure of the controller or of the entire system no longer threatens data integrity;

    • The write-back setting writes data to the cache first, which speeds up both read and write operations. In RAID controllers this option can normally be enabled only with a special battery (or flash backup) protecting the otherwise volatile cache memory. If a separate SSD is used as the cache, the power problem goes away by itself.
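    The difference between the two policies can be sketched in a few lines of Python. This is a toy model, not a controller implementation: two plain dicts stand in for the SSD cache and the HDD array.

```python
# Toy model of the two cache write policies ("cache" = SSD, "array" = HDD).

cache, array = {}, {}
dirty = set()   # blocks written to the cache but not yet to the array

def write_through(key, value):
    # Write goes to the cache AND to the backing array before returning:
    # writes are as slow as the HDD, but the array is always consistent.
    cache[key] = value
    array[key] = value

def write_back(key, value):
    # Write lands only in the cache and is marked dirty; it reaches the array
    # later, when flushed. Fast, but the cache must be protected (battery,
    # flash, or a non-volatile SSD) or data is lost on power failure.
    cache[key] = value
    dirty.add(key)

def flush():
    # Background destaging of dirty blocks to the array.
    for key in list(dirty):
        array[key] = cache[key]
        dirty.discard(key)

def read(key):
    # Both policies accelerate reads the same way: a cache hit avoids the HDD.
    if key in cache:
        return cache[key]
    value = array[key]      # slow HDD read
    cache[key] = value      # populate the cache for next time
    return value
```

    The trade-off is exactly the one described above: write-back returns as soon as the fast cache holds the data, at the cost of a window during which the array is stale.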

    Enabling this feature usually requires a special license or hardware key. Here are the names of these technologies from the popular vendors on the market:

    • LSI (Broadcom) MegaRAID CacheCade. Allows up to 32 SSDs to be used as cache, with a total size of no more than 512 GB; RAID over the caching disks is supported. There are several types of hardware and software keys, costing about 20,000 rubles;

    • Microsemi Adaptec MaxCache. Allows up to 8 SSDs as cache in any RAID configuration. No separate license needs to be purchased; the cache is supported in Q-series adapters;

    • HPE SmartCache in 8th and 9th generation ProLiant servers. Current prices are available on request.

    The operation of an SSD cache is extremely simple: frequently used data is moved or copied to the SSD for quick access, while less popular information remains on the HDD. As a result, the speed of working with repeatedly accessed data increases significantly.
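    A common way to decide what stays on the fast tier is least-recently-used (LRU) eviction. The following sketch models that behavior; the block IDs and capacity are invented for illustration and this is not any vendor's actual algorithm:

```python
from collections import OrderedDict

# Minimal LRU model of an SSD read cache: frequently re-read blocks stay on
# the "SSD", cold blocks fall back to being served from the "HDD" only.

class SsdReadCache:
    def __init__(self, capacity_blocks: int):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()          # block id -> cached data

    def read(self, block_id, hdd):
        if block_id in self.blocks:          # cache hit: fast SSD read
            self.blocks.move_to_end(block_id)
            return self.blocks[block_id]
        data = hdd[block_id]                 # cache miss: slow HDD read
        self.blocks[block_id] = data         # copy the block to the SSD
        if len(self.blocks) > self.capacity: # evict the least-recently-used
            self.blocks.popitem(last=False)
        return data

hdd = {n: f"data-{n}" for n in range(100)}
cache = SsdReadCache(capacity_blocks=3)
for block in [1, 2, 1, 3, 1, 4]:             # block 1 is "hot"
    cache.read(block, hdd)
print(list(cache.blocks))                    # [3, 1, 4] - the hot block survives
```

    Note how the frequently read block stays cached while the blocks touched only once are the ones evicted.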


    The following graphs illustrate the operation of an SSD-based RAID cache:



    StorageReview: database performance comparison of arrays built from regular disks and from an LSI CacheCade-based alternative.


    But if there is a hardware implementation, then there is probably a software equivalent for less money.

    Fast cache without a controller

    Besides software RAID, there is also software SSD caching. Windows Server 2012 introduced an interesting technology called Storage Spaces, which lets you assemble RAID arrays from any available disks. The drives are combined into pools, which in turn contain data volumes, a design reminiscent of most hardware storage systems. Among the useful features of Storage Spaces are multi-tier storage (Storage Tiers) and a write-back cache.



    Storage Tiers lets you create a single pool of HDDs and SSDs in which the more popular data is kept on the SSDs. The recommended ratio of SSD to HDD is 1:4 to 1:6. When designing the pool, keep mirroring and parity (the analogues of RAID 1 and RAID 5) in mind, since each part of a mirror must contain the same number of regular disks and SSDs.


    The write cache in Storage Spaces is no different from the usual write-back cache in RAID arrays. Here, the required volume is simply carved out of the SSD capacity, one gigabyte by default.

    A traditional storage system places data on hard drives (HDD) and solid-state drives (SSD). In recent years HDD capacities have grown at a rapid pace, yet their random-access speed is still low. Some applications, such as databases, cloud services or virtualization, need both high access speed and large capacity. Using only HDDs does not deliver the required performance, while using only SSDs is unreasonably expensive. Using an SSD purely as a cache gives the best price/performance ratio for the system as a whole: the data itself resides on capacious HDDs, while the expensive SSDs provide a performance boost for random access to that data.

    Most often, an SSD cache will be useful in the following cases:

    1. When the random-read IOPS of the HDDs are the bottleneck.
    2. When there are significantly more read I/O operations than writes.
    3. When the volume of frequently used data is smaller than the SSD capacity.

    Solution

    SSD caching is an additional cache layer that increases performance. One or more SSDs must be assigned to a virtual disk (LUN) to be used as its cache. Note that these SSDs will not be available for data storage. Currently the SSD cache size is limited to 2.4TB.

    When a read or write operation is performed, a copy of the data is placed on the SSD, and from then on any operation with this block is served directly from the SSD. As a result, response time goes down and overall performance goes up. If the SSD fails, no data is lost, because the cache holds only a copy of the data stored on the HDDs.

    The SSD cache is divided into groups of blocks, and each block is divided into subblocks. The I/O pattern of the virtual disk determines the choice of block and subblock sizes.

    Populating the cache

    Reading data from the HDD and writing it to the SSD is called cache population. This operation runs in the background immediately after the host performs a read or write. Cache population is governed by two parameters:

    • Populate-on-read threshold
    • Populate-on-write threshold

    Both values must be greater than zero; if either is zero, the corresponding read or write caching is disabled. Each block is associated with a read counter and a write counter tied to these thresholds. When the host performs a read and the data is already in the cache, the read counter is incremented. If the data is not in the cache and the read counter is greater than or equal to the populate-on-read threshold, the data is copied into the cache; if the counter is still below the threshold, the data is read bypassing the cache. Write operations work the same way.
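    The populate-on-read logic above can be sketched in a few lines. The class and method names are illustrative, not the vendor's actual implementation:

```python
# Sketch of the populate-on-read counter logic described above.

class CachedDisk:
    def __init__(self, populate_on_read_threshold: int):
        self.threshold = populate_on_read_threshold  # must be > 0 to enable caching
        self.read_count = {}                         # per-block read counters
        self.cache = {}                              # blocks currently on the SSD

    def read(self, block_id, hdd):
        if block_id in self.cache:                   # hit: serve from SSD,
            self.read_count[block_id] += 1           # counter still incremented
            return self.cache[block_id]
        count = self.read_count.get(block_id, 0) + 1
        self.read_count[block_id] = count
        data = hdd[block_id]                         # miss: read from HDD
        if count >= self.threshold:                  # popular enough -> populate
            self.cache[block_id] = data
        return data

hdd = {7: "hot page", 8: "cold page"}
disk = CachedDisk(populate_on_read_threshold=2)
disk.read(7, hdd)          # first read: counter = 1, bypasses the cache
disk.read(7, hdd)          # second read: counter reaches 2, block is cached
disk.read(8, hdd)          # single read: stays on the HDD only
print(sorted(disk.cache))  # [7]
```

    With a threshold of 2, a block must be requested twice before it earns a place on the SSD, which keeps one-off reads from polluting the cache.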

    SSD cache operation scenarios

    I/O Type

    The I/O type determines the SSD cache configuration. This configuration is selected by the administrator and defines the block size, subblock size, populate-on-read threshold and populate-on-write threshold. There are three predefined configurations matching typical I/O types: databases, file system and web services. The administrator must select the SSD cache configuration for each virtual disk. The configuration type can be changed during operation, but doing so resets the contents of the cache. If none of the predefined configurations suits the load profile, you can specify your own parameter values.



    The block size affects the cache “warm-up” time, that is, how long it takes for the most needed data to migrate to the SSD. If the data is located close together on the HDD, it is better to use a large block size; if the data is scattered, a small block size makes more sense.

    The subblock size also affects warm-up time: a larger subblock shortens the time needed to populate the cache but increases the response time to host requests. In addition, the subblock size affects CPU load and memory and channel throughput.


    To calculate the approximate cache warm-up time, you can use the following method.

    • T – cache warm-up time in seconds
    • I – random-access IOPS of a single HDD
    • S – I/O block size
    • D – number of HDDs
    • C – total SSD capacity
    • P – populate-on-read threshold or populate-on-write threshold

    Then T = (C*P) / (I*S*D)
    For example: 16 disks with 250 IOPS, one 480GB SSD as a cache, the nature of the load is web services (64KB) and populate-on-read threshold = 2.
    Then the warm-up time will be T = (480GB*2) / (250*64KB*16) ≈ 3932 sec ≈ 65.5 min
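    The worked example above can be reproduced directly. A small sketch, assuming binary units (KB = 1024 bytes, GB = 2^30 bytes), since those match the article's figures:

```python
# Reproducing the cache warm-up estimate T = (C * P) / (I * S * D).

def warmup_seconds(ssd_capacity_bytes, populate_threshold,
                   hdd_iops, io_block_bytes, hdd_count):
    """Approximate time to fill the SSD cache, per the article's formula."""
    return (ssd_capacity_bytes * populate_threshold) / (
        hdd_iops * io_block_bytes * hdd_count)

GB, KB = 2**30, 2**10

# 16 HDDs at 250 IOPS, one 480GB SSD, 64KB web-service I/O, threshold = 2:
t = warmup_seconds(480 * GB, 2, 250, 64 * KB, 16)
print(round(t), "sec ~", round(t / 60, 1), "min")   # 3932 sec ~ 65.5 min
```

    The function makes it easy to see the levers: doubling the populate threshold doubles warm-up time, while adding HDDs or using larger I/O blocks shortens it.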

    Testing

    First, let's look at the process of creating an SSD cache.

    1. After creating the virtual disk, click ↓, then Set SSD Caching
    2. Select Enable
    3. Select a configuration from the drop-down list
    4. Click Select Disks and select the SSDs that will be used as cache
    5. Click OK

    Restrictions

    • Only SSDs can be used as cache
    • An SSD can only be assigned to one virtual disk at a time
    • Supports up to 8 SSDs per virtual disk
    • Supports a total capacity of up to 2.4TB SSD per system
    • SSD caching requires a license that is purchased separately from the system

    Results

    Test configuration:

    • HDD Seagate Constellation ES ST1000NM0011 1TB SATA 6Gb/s (x8)
    • SSD Intel SSD DC3500, SSDSC2BB480G4, 480GB, SATA 6Gb/s (x5)
    • RAID 5
    • I/O Type Database Service (8KB)
    • I/O pattern 8KB, random read 90% + write 10%
    • Virtual disk 2TB

    According to the formula, the cache warm-up time is T = (2TB*2) / (244*8KB*8) ≈ 275036 sec ≈ 76.4 hours




    Differences between SSD and HDD+SSD-cache disk subsystems for virtual dedicated servers, with a performance comparison.

    HDD+SSD cache drives

    Principle of operation. Fast SSD drives are used to cache requests to slow but far more capacious and inexpensive HDD drives. In this mode, every virtual machine access to the hard drive is first checked against the cache: if the data is present there, it is returned from the cache rather than read from the slow disk. If the data is not found in the cache, it is read from the HDD and written into the cache.

    Benefits of the HDD+SSD cache technology. The main advantage is the amount of disk space provided. Servers based on this technology are also cheaper, which matters for hosting start-up projects, test servers and auxiliary services, for example:

    • Data backups
    • Large data archives
    • Any services and sites for which the speed of reading/writing from disks is not critical

    SSD drives

    Principle of operation. An SSD (solid-state drive) is a drive that, unlike conventional hard disks, has no moving parts; it stores data in flash memory. Put simply, it is a big flash drive.

    Benefits of SSD technology. The main advantage of SSD drives is speed. Unlike a conventional hard drive, no time is spent positioning read heads, so data access is faster. According to tests, the read/write speed of an SSD is several times higher than that of conventional HDDs.

    Who will benefit from a VDS or VPS on SSD?

    • Owners of online stores: the speed of working with databases on SSD is disproportionately higher than on HDD.
    • Owners of other sites: your pages will open much faster, which matters for ranking in search engines.
    • Developers: code compiles faster on SSD drives, saving your time.
    • Game servers: loading times drop, so players don't have to wait.

    NVMe drives

    Principle of operation. NVM Express (NVMe, NVMHCI, Non-Volatile Memory Host Controller Interface Specification) is a newer generation of SSD. It uses its own protocol, developed from scratch, and connects via the PCI Express bus.

    Benefits of NVMe technology. Reads and writes on NVMe drives are 2-3 times faster than on regular SSDs. The PCI Express bus does not limit disk speed, which ensures the increased performance. In addition, NVMe handles parallel operations better, so more read/write operations complete per unit of time.

    When should you order a virtual server with an NVMe drive?

    • In the same cases as an SSD: when SSD performance is no longer enough for your project, or you are planning for growth and high load.

    Comparing performance

    We compared the performance of virtual machines on production (“combat”) physical servers with different disk subsystems.

    We measured the number of IOPS (Input/Output Operations Per Second), one of the key parameters for gauging the performance of storage systems, hard drives and solid-state drives.

    Note that websites mostly perform read operations rather than writes. On reads, SSD drives score about three times higher than the HDD+SSD-cache technology.

    Technology performance comparison

    I recently ran into the problem of speeding up the disk subsystem of the Lenovo U530 ultrabook (and other similar models). It all started when my choice fell on this laptop to replace an older one.

    This series comes in several configurations, which can be viewed at this link: http://shop.lenovo.com/ru/ru/laptops/lenovo/u-series/u530-touch/index.html

    I took the option with an Intel Core i7-4500U processor and a 1TB HDD + 16GB SSD cache.

    Note: this ultrabook and similar models use an SSD in the M.2 format: http://en.wikipedia.org/wiki/M.2

    Later, while working with it, I saw no sign of the cache in action, so I began to figure out how it all works.

    Intel chipsets (in particular the Intel 8 Series) include a technology called Intel Rapid Storage Technology (you can read more about it here: http://www.intel.ru/content/www/ru/ru/architecture-and-technology/rapid-storage-technology.html).

    This technology includes the Intel® Smart Response feature, which enables a hybrid SSHD or HDD + SSD setup to speed up the disk subsystem.

    In short, it keeps frequently used files on the SSD and reads them from there on subsequent launches, which significantly improves the performance of the system as a whole. Since Smart Response did not work in my case, there were three options:

    1) Use a third-party utility from SanDisk - ExpressCache

    2) Use Windows ReadyBoost technology (http://ru.wikipedia.org/wiki/ReadyBoost)

    3) Use ExpressCache plus moving the SWAP file to a separate SSD partition

    Note: many of you have probably seen instructions on the Internet for moving the hibernation file to an SSD. I checked from my own experience that this DOES NOT WORK: even in this case, when you create a hibernation partition, it is still managed by Intel Rapid Storage Technology, and since that does not work for us, you will get nothing but a useless hibernation partition on the SSD.

    Now I’ll describe in more detail how to configure each of the three options.

    1. Use a third-party utility from SanDisk - ExpressCache

    Here are the steps:

    If you have never used this utility before, then do the following:

    1) Download it, for example from here: http://support.lenovo.com/us/en/downloads/ds035460

    2) Go to “Disk Management” and delete all partitions from the SSD disk;

    3) Install the ExpressCache program, reboot, and you're done: the program will create the required partition itself and start using it.


    4) To check that it is working, open a command prompt as administrator and run eccmd.exe -info

    5) You should see something like this:

    Figure 6 - checking cache operation with the eccmd.exe -info command


    2. Use Windows ReadyBoost technology

    To use this technology you must:

    1) Go to “Disk Management” and delete all partitions from the SSD disk;

    2) Create one primary partition on the SSD;

    3) The new partition will appear as a new disk with its own letter. Go to My Computer, right-click the disk, select “Properties” from the menu, then the “ReadyBoost” tab.

    4) In the tab, select the “Use this device” option and drag the slider to allocate all the available space.

    After this, the SSD will speed up file system operations using Microsoft Windows ReadyBoost technology.

    I don't know how effective it is with an SSD, since ReadyBoost was originally intended to use ordinary NAND flash key-fob drives as cache devices, and the access speed of such devices is much lower than that of an mSATA SSD.


    3. Use ExpressCache plus moving the SWAP file to a separate SSD partition.

    In my opinion, this is the best method for this case: on the one hand we speed up swapping by moving it to the SSD, and on the other we keep the cache working. This method is better suited to ultrabooks with an SSD of 16 GB or more.

    How to do it?

    1) Go to “Disk Management” and delete all partitions from the SSD disk;

    2) You need two partitions on the SSD: one we create ourselves, the second is created by the ExpressCache program;

    3) Create a partition for the swap file; for example, 6 GB is quite enough for an ultrabook with 8 GB of RAM;

    5) Now we need to move the swap file from drive C: to the new SSD partition. To do this, open the System settings, then “Advanced system settings”.


    Figure 8 - Advanced system settings

    In the “Advanced” tab, click the “Settings” button, then the “Advanced” tab and the “Change” button. Disable “Automatically manage paging file size”, then select the disk that should hold the swap file from the list, try the “System managed size” option and click the “Set” button. If the system refuses, it most likely considers the 6 GB partition too small; however, the recommended file size shown at the bottom of the window will be around 4.5 GB, which is actually smaller than our partition. In that case choose the “Custom size” option, enter the recommended size as the “Initial size” and the entire partition as the “Maximum size”, then click the “Set” button.
    Next, disable the existing swap file: from the list of disks, select the one where the swap currently resides (for example C:), choose “No paging file” below, and click “Set”.
    That's it: your paging file will now live on the SSD.
    Click “OK” and reboot the computer.

    6) To check where the file ended up, look at drive C: and the new SSD partition (hidden-file visibility must be enabled in Explorer, or use Total Commander).


    Figure 12 - the SWAP file on the SSD partition

    The page file is called pagefile.sys; it should be present on the new disk and absent from the old one.

    7) Now set up the caching partition by repeating the steps described in option 1.

    As a result, after the actions taken, we get an acceleration of the entire system as a whole.

    Figure 13 - SSD partitions for SWAP and SSD cache

    I wish you a fast system and a long SSD lifespan :)

    I will be glad to receive comments on my article and feedback of any kind. Thank you!

    • Comparison of performance of different types of server drives (HDD, SSD, SATA DOM, eUSB)
    • Performance comparison of the latest Intel and Adaptec server RAID controllers (24 SSD)
    • Server RAID controller performance comparison
    • Disk subsystem performance of Intel servers based on Xeon E5-2600 and Xeon E5-2400
    • Comparison tables: RAID controllers, server HDDs, server SSDs
    • Links to price list sections: RAID controllers, Server HDDs, Server SSDs

    Most server applications work with the server's disk subsystem in random-access mode, reading or writing data in small blocks a few kilobytes in size, and these blocks can be located anywhere in the disk array.

    For hard drives, the average access time to an arbitrary block of data is on the order of a few milliseconds, the time required to position the disk head over the desired data. In one second an HDD can therefore read (or write) several hundred such blocks. This metric reflects the drive's random-I/O performance and is measured in IOPS (Input/Output operations Per Second). In other words, random-access performance for a hard drive is a few hundred IOPS.

    As a rule, several hard drives in a server's disk subsystem are combined into a RAID array in which they operate in parallel. The speed of random reads then grows in proportion to the number of disks for any RAID type, but the speed of writes depends not only on the number of disks but also on how they are combined into the array.
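    The dependence of write speed on the array type is often expressed through the conventional "write penalty" per RAID level. A back-of-the-envelope sketch, with an illustrative per-disk IOPS figure (not from the article's measurements):

```python
# Rough model of random-I/O scaling in RAID arrays using the conventional
# write penalty: each host write costs this many physical disk operations.

WRITE_PENALTY = {"RAID 0": 1, "RAID 1": 2, "RAID 5": 4, "RAID 6": 6}

def array_read_iops(disk_iops, disk_count):
    # Random reads scale with disk count regardless of RAID level.
    return disk_iops * disk_count

def array_write_iops(disk_iops, disk_count, raid_level):
    return disk_iops * disk_count / WRITE_PENALTY[raid_level]

# 8 HDDs at ~200 IOPS each (an assumed figure):
for level in WRITE_PENALTY:
    print(level, "reads:", array_read_iops(200, 8),
          "writes:", round(array_write_iops(200, 8, level)))
```

    The model shows why reads scale uniformly while RAID 5/6 writes lag: every parity update multiplies the physical I/O behind a single host write.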

    Quite often the disk subsystem is the factor that limits server performance. Under a large number of simultaneous requests it can hit its performance ceiling, after which adding RAM or CPU frequency has no effect.

    A radical way to increase disk subsystem performance is to use solid-state drives (SSDs), which write information to non-volatile flash memory. For SSDs, the access time to a random block of data is several tens of microseconds, two orders of magnitude less than for hard drives, which is why even a single SSD reaches 60,000 IOPS on random operations.
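    The IOPS figures above follow almost directly from access latency: a drive that takes t seconds to reach a random block can serve roughly 1/t such requests per second. A quick sketch with typical order-of-magnitude latencies (illustrative values, not measured ones):

```python
# Rough relationship between random-access latency and single-drive IOPS.

def iops_from_latency(access_time_seconds: float) -> float:
    # Each random request occupies the drive for ~access_time_seconds,
    # so throughput is approximately the reciprocal of the latency.
    return 1.0 / access_time_seconds

print(round(iops_from_latency(5e-3)))    # HDD, ~5 ms seek  -> ~200 IOPS
print(round(iops_from_latency(20e-6)))   # SSD, ~20 us      -> ~50000 IOPS
```

    This reciprocal relation is why a two-orders-of-magnitude drop in latency translates into a two-orders-of-magnitude jump in random IOPS.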

    The following graphs show comparative performance figures for RAID arrays of 8 hard drives and 8 SSDs. Data are provided for four different RAID types: RAID 0, RAID 1, RAID 5 and RAID 6. To avoid overloading the text with technical details, we placed the testing methodology at the end of the article.


    The diagrams show that using SSDs increases the performance of the server's disk subsystem on random-access operations by 20 to 40 times. However, widespread use of SSDs runs into the following major limitations.

    First, modern SSDs have small capacities: the maximum capacity of hard drives (3TB) exceeds that of server SSDs (300GB) by a factor of 10. Second, SSDs are roughly 10 times more expensive than hard drives per gigabyte of disk space. Building a disk subsystem from SSDs alone is therefore still quite rare.

    However, you can use SSDs as a RAID controller cache. Let's look in more detail at how this works and what it gives.

    The point is that even in a fairly large server disk subsystem holding tens of terabytes, the volume of “active” data, that is, the data used most often, is relatively small. For example, if you work with a database that stores records over a long period, only the small portion relating to the current time interval is likely to be in active use. Or, if the server hosts Internet resources, most requests will concern a small number of the most visited pages.

    Thus, if this "active" (or "hot") data is kept not on the "slow" hard drives but in “fast” cache memory on SSDs, disk subsystem performance will increase by an order of magnitude. And you do not need to worry about which data to place in the cache: after the controller reads data from a hard drive for the first time, it leaves a copy in the SSD cache and serves subsequent reads from there.

    Moreover, caching works not only for reads but also for writes. Any write operation puts the data into the SSD cache memory rather than onto the hard drive, so writes also become an order of magnitude faster.

    In practice, an SSD caching mechanism can be implemented on any 6 Gb/s second-generation RAID module or controller based on the LSI2208 chip: RMS25CB040, RMS25CB080, RMT3CB080, RMS25PB040, RMS25PB080, RS25DB080, RS25AB080, RMT3PB080. These RAID modules and controllers are used in Team servers based on Intel E5-2600 and E5-2400 processors (the Intel Sandy Bridge platform).

    To use SSD caching mode, you must install the AXXRPFKSSD2 hardware key on the RAID controller. Besides enabling SSD caching, this key also speeds up the controller's work with plain SSDs used not as cache memory but as regular drives; in that case random read/write performance can reach 465,000 IOPS (FastPath I/O mode).

    Let's look at the performance results for the same array of eight hard drives, but now with four SSDs as cache memory, and compare them with the figures for this array without caching.



    We tested two ways of organizing the SSD cache. In the first, the 4 SSDs were combined into a level-0 RAID array (R0); in the second, into a mirrored array (R1). The second option is slightly slower on writes, but it provides redundancy for the data in the SSD cache, so it is preferable.

    Interestingly, read and write performance turns out to be practically independent of the type of the “main” RAID array of hard drives; it is determined only by the speed of the SSD cache drives and the type of their RAID array. Moreover, a “cached” RAID 6 of hard drives proves faster on writes than a “pure” RAID 6 of SSDs (29,300 or 24,900 IOPS versus 15,320 IOPS). The explanation is simple: we are effectively measuring not RAID 6 but the RAID 0 or RAID 1 cache, and those arrays are faster on writes even with fewer disks.

    You can also use a single SSD as cache memory, but we recommend against it, because the cache data would then have no redundancy: if that SSD fails, data integrity will be compromised. For SSD caching it is better to use at least two SSDs combined into a level-1 RAID array (a “mirror”).

    We hope that the information presented in this article will help you in choosing an effective server disk subsystem configuration. In addition, our managers and engineers are always ready to provide the necessary technical advice.

    Test bench configuration and testing methodology

    Server platform - Team R2000GZ
    SAS expander - Intel RES2CV360 36-port expander card
    RAID controller - Intel RS25DB080 with the AXXRPFKSSD2 key
    HDD - 8 SAS 2.5" Seagate Savvio 10K.5 300GB 6Gb/s 10000RPM 64MB-cache drives
    SSD - 8 or 4 SATA 2.5" Intel 520 Series 180GB 6Gb/s drives

    Testing was performed using the Intel IOMeter program.

    For each hardware configuration option, the optimal settings for the controller cache memory were selected.

    The virtual disk size for testing was 50GB, chosen so that the tested disk could fit entirely into the SSD cache.

    Other parameters:
    Strip Size - 256KB.
    The data block size for sequential operations is 1MB.
    The data block size for random access operations is 4 KB.
    Queue depth - 256.