Physical disks (PDs) are carved up into 1 GB chunklets. The 3PAR OS reserves a certain number of chunklets as spare chunklets, depending on the sparing algorithm and system configuration. These spare chunklets are distributed across all drives.
RAID functionality is implemented at the Logical Disk (LD) level. Depending on the RAID level specified, LDs are formed by striping across chunklets from a number of physical drives spread across different enclosures. An LD consists of chunklets from the same type of disk drive (e.g. 7.2K/10K/15K RPM HDDs or SSDs).
An LD is further subdivided into 128MB regions. These regions are assigned to Virtual Volumes (VVs), which in turn are masked to servers. A single LD can have its regions assigned to multiple different servers.
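The capacity hierarchy above (1GB chunklets subdivided into 128MB regions) can be sketched as a quick back-of-the-envelope calculation. This is purely illustrative arithmetic; the names are not 3PAR OS internals:

```python
# Hypothetical model of the sizes described above: PDs are carved into
# 1 GiB chunklets, and LDs built from chunklets are subdivided into
# 128 MiB regions handed out to VVs.
CHUNKLET_MiB = 1024   # 1 GiB chunklet
REGION_MiB = 128      # LD region size

def regions_per_chunklet() -> int:
    """Each 1 GiB chunklet contributes 1024 / 128 = 8 regions."""
    return CHUNKLET_MiB // REGION_MiB

def ld_region_count(num_chunklets: int) -> int:
    """Total regions available in an LD built from num_chunklets chunklets."""
    return num_chunklets * regions_per_chunklet()

print(regions_per_chunklet())   # 8
print(ld_region_count(16))      # 128 regions in a 16 GiB LD
```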
There are three types of LDs:
User LDs (USR LDs) provide user storage space to VVs.
Snapshot data LDs (SD LDs) provide the storage space for snapshots (virtual copies), TPVVs, and TDVVs.
Snapshot administration LDs (SA LDs) provide the storage space for the metadata used by snapshots, TPVVs, and TDVVs.
The 3PAR OS automatically creates LDs based on the parameters listed below:
For RAID 5 ((2 <= D <= 8) + 1P), the set size is 4
For RAID MP ([4|6|8|10|14]D + 2P), the set size is 16 (RAID MP = multiple distributed parity)
For RAID 1, the set size is 2
Set size: number of chunklets (data plus parity) in a RAID set, which determines how many drives the set spans.
Row size: level of additional striping across more drives. E.g., a RAID 5 LD with a row size of 2 and a set size of 4 is effectively striped across 8 drives.
Step size: number of bytes that are stored contiguously on a single PD.
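The relationship between set size and row size reduces to a single multiplication; a minimal sketch (the function name is illustrative, not a 3PAR CLI command):

```python
# How set size and row size combine to determine stripe width.
def effective_stripe_drives(set_size: int, row_size: int) -> int:
    """An LD spans set_size * row_size drives in total.
    E.g. RAID 5 with set size 4 and row size 2 spans 8 drives."""
    return set_size * row_size

assert effective_stripe_drives(4, 2) == 8    # the RAID 5 example above
assert effective_stripe_drives(16, 1) == 16  # a single-row RAID MP set
```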
Every node creates LDs from the PDs it owns; thus, the chunklets of any given PD are owned by a single node, with the partner node acting as the backup owner.
A Common Provisioning Group (CPG) is a virtual pool of LDs of the same type. Storage space (regions) is allocated to Virtual Volumes (VVs) from the CPG's logical disk pool.
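A minimal, hypothetical model of that allocation path, assuming a CPG simply hands out free 128MB regions from its LD pool to VVs on demand (all names are illustrative, not 3PAR OS structures):

```python
# Toy CPG: a pool of region IDs granted to VVs as they need space.
from collections import defaultdict

class CPG:
    def __init__(self, total_regions: int):
        self.free_regions = list(range(total_regions))  # region IDs in the LD pool
        self.vv_regions = defaultdict(list)             # VV name -> granted regions

    def allocate(self, vv_name: str, count: int) -> list:
        """Grant `count` regions to a VV, or fail if the pool is exhausted."""
        if count > len(self.free_regions):
            raise RuntimeError("CPG out of space (a real array would grow the LDs)")
        grant = [self.free_regions.pop() for _ in range(count)]
        self.vv_regions[vv_name].extend(grant)
        return grant

cpg = CPG(total_regions=64)   # e.g. an 8 GiB LD pool (64 x 128 MiB regions)
cpg.allocate("vv0", 2)        # 256 MiB granted to vv0
```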
There are two kinds of VVs:
Base volumes: a fully provisioned VV, TPVV, or TDVV; it contains the user-visible data.
Snapshot volumes: as the name suggests, a snapshot volume contains the modified data for a given snapshot.
VVs have three types of space:
User Space: Contains data of the base VV. Uses USR LD.
Snapshot data space: Contains modified data for a given snapshot. The granularity of snapshot data mapping is 16KB pages. Uses SD LD.
Snapshot admin space: Contains metadata for snapshots. Uses SA LD.
VVs can be of three types:
Fully Provisioned VV (FPVV): It is a thick device with no snapshots.
Commonly Provisioned VV (CPVV): It is a thick device with snapshots.
Thinly Provisioned VV (TPVV): its base-volume space is allocated from the associated CPG, and its snapshot space from a snapshot CPG, if one is associated. On creation, 256MB per node is allocated to a TPVV.
Thinly Deduped VV (TDVV): behaves similarly to a TPVV, with the fundamental difference that TDVVs within the same CPG share common pages of data. TDVVs are supported only on CPGs that use SSDs as a tier of storage.
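The page-sharing idea behind TDVVs can be sketched with content hashing: identical pages written to different volumes in the same CPG map to one stored copy. This is a hedged toy model; the real dedup engine runs in the 3PAR ASIC and is far more involved:

```python
# Toy dedup store: volumes map LBAs to content hashes, and identical
# pages are stored once per CPG. Page size follows the 16 KB snapshot
# mapping granularity mentioned above.
import hashlib

PAGE_BYTES = 16 * 1024
shared_pages = {}   # content hash -> the single stored copy

def write_page(volume: dict, lba: int, data: bytes) -> None:
    """Point the volume's LBA at a shared copy of identical data."""
    digest = hashlib.sha256(data).hexdigest()
    shared_pages.setdefault(digest, data)   # store only if unseen
    volume[lba] = digest                    # the volume holds just a reference

tdvv_a, tdvv_b = {}, {}
page = b"x" * PAGE_BYTES
write_page(tdvv_a, 0, page)
write_page(tdvv_b, 7, page)
assert len(shared_pages) == 1   # both volumes share one physical copy
```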
VLUNs and LUN masking:
Hosts can access VVs only after they have been exported as VLUNs. VVs can be exported in three ways:
To specific hosts: The VV is visible to the specified WWNs, regardless of which port(s) those WWNs appear on. This is a convenient way to export VVs to known hosts.
To any host on a specific port: this is useful when the hosts (or WWNs) are not known prior to exporting.
To specific hosts on a specific port: the VV is visible only to the specified WWNs, and only on the specified port.
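The three export rules above can be modeled as a single visibility check: an unset WWN matches any host, and an unset port matches any port. A minimal sketch with made-up WWN and port values:

```python
# Toy VLUN visibility check for the three export styles: host-only,
# port-only, and host-on-port. Field values are illustrative.
def vlun_visible(export: dict, host_wwn: str, port: str) -> bool:
    wwn_ok = export["wwn"] is None or export["wwn"] == host_wwn
    port_ok = export["port"] is None or export["port"] == port
    return wwn_ok and port_ok

host_export = {"wwn": "50:01:43:80:00:00:00:01", "port": None}       # specific host, any port
port_export = {"wwn": None, "port": "0:2:1"}                         # any host on one port
both_export = {"wwn": "50:01:43:80:00:00:00:01", "port": "0:2:1"}    # specific host on one port

assert vlun_visible(host_export, "50:01:43:80:00:00:00:01", "1:2:1")
assert not vlun_visible(both_export, "50:01:43:80:00:00:00:01", "1:2:1")
```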
Each IO received carries data plus metadata such as SCSI control commands. The array separates the two, processing the data in the ASIC and the metadata/SCSI commands in the control processor.
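That fast-path/control-path split can be illustrated with a trivial dispatcher, assuming (purely for illustration, this is not 3PAR firmware) that each IO is tagged by type:

```python
# Toy dispatcher: bulk data goes to the fast path (the ASIC in 3PAR),
# SCSI control commands and metadata to the control processor.
def dispatch(io: dict) -> str:
    if io["type"] == "data":
        return "ASIC"               # data path: data movement, parity
    return "control-processor"      # control path: SCSI commands, metadata

assert dispatch({"type": "data"}) == "ASIC"
assert dispatch({"type": "scsi-control"}) == "control-processor"
```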