Storage of tomorrow

We are reaching a stage where many organizations, small and large, find that storing data cost-effectively, in a way that also meets business requirements, is a serious challenge. It is a top-five IT issue for most organizations, and for many it ranks among their top two priorities.

 

SEQUESTOR Enterprise Storage Overview

SEQUESTOR Enterprise Storage is a high-performance, cost-effective, software-defined storage solution that:

  • Supports cloud infrastructure and web-scale object storage.

  • Combines the most stable version of the Linux kernel with a storage management console, deployment tools, and support services.

  • Flexibly and automatically manages petabyte-scale data deployments so that the enterprise can focus on managing the business.

Basic Functionalities

  • Store and retrieve data via block device

  • Replicate data

  • Monitor and report on cluster health

  • Redistribute data dynamically (remap and backfill)

  • Ensure data integrity (scrubbing)

  • Detect and recover from faults and failures

 
 

Instant tiering

A cache tier provides Sequestor clients with better I/O performance for a subset of the data stored in a backing storage tier. Cache tiering involves creating a pool of relatively fast and expensive storage devices (e.g., solid-state drives) configured to act as a cache tier, and a backing pool of erasure-coded or relatively slower and cheaper devices configured to act as an economical storage tier.
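
As a minimal sketch of how such a tier might be wired up, assuming Sequestor exposes a librados-compatible Python binding and Ceph-style monitor commands (the configuration path, pool names, and cache mode below are illustrative, not documented Sequestor defaults):

    import json

    import rados  # librados Python binding, assumed to ship with Sequestor

    # Placeholder configuration path for illustration.
    cluster = rados.Rados(conffile="/etc/sequestor/sequestor.conf")
    cluster.connect()

    def mon(cmd: dict) -> None:
        """Send a JSON monitor command and raise if it fails."""
        ret, _out, err = cluster.mon_command(json.dumps(cmd), b"")
        if ret != 0:
            raise RuntimeError(err)

    # Attach an SSD-backed pool ("hot") as a writeback cache tier in front
    # of an erasure-coded backing pool ("cold"), then route client I/O
    # through the cache by setting it as the overlay.
    mon({"prefix": "osd tier add", "pool": "cold", "tierpool": "hot"})
    mon({"prefix": "osd tier cache-mode", "pool": "hot", "mode": "writeback"})
    mon({"prefix": "osd tier set-overlay", "pool": "cold", "overlaypool": "hot"})

    cluster.shutdown()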

 

Add capacity on the fly

Adding capacity to a storage cluster has never been easier: boot new servers over the network, and the Sequestor master node automatically installs them and adds them to the cluster, with no outage and in under 30 minutes.


High performance

Sequestor scale-out storage solutions can deliver up to 6.45 million input/output operations per second (IOPS) per cluster and 2000 gigabits per second (Gbps) aggregate throughput. This allows you to get the performance you need for your most demanding big data applications and workloads.


Features

EXABYTE SCALABILITY

  • Scale-out architecture. Grow a cluster from one to thousands of nodes. Say goodbye to forklift upgrades and data migration projects.

  • Automatic rebalancing. Benefit from a peer-to-peer architecture that seamlessly handles failures and ensures data distribution throughout the cluster.

  • Rolling software upgrades. Upgrade clusters in phases with minimal or no downtime.

HIGH PERFORMANCE

  • High-performance multi-node read and write support

  • Automatic parallel reads on block device storage

  • Benchmarked at 2 TB/s

APIs

  • S3 and Swift. Enjoy seamless cloud integration with protocols used by Amazon Web Services and the OpenStack Object Storage project.

  • RESTful. Manage all cluster and object storage functions programmatically. Gain independence and speed by not having to manually provision storage.

SECURITY

  • Authentication and authorization. Integrate with Active Directory, LDAP, and Keystone v3.

  • Policies. Limit access at the pool, user, bucket, or data level.

  • Encryption. Implement cluster-level, at-rest encryption.

RELIABILITY AND AVAILABILITY

  • Striping, erasure coding, or replication across nodes. Enjoy data durability, high availability, and high performance.

  • Dynamic block resizing. Expand or shrink Sequestor block devices with zero downtime.

  • Storage policies. Configure placement of data to reflect SLAs, performance requirements, and failure domains using the CRUSH algorithm.

  • Snapshots. Take snapshots of an entire pool or of individual block devices.

PERFORMANCE-RELATED FEATURES

  • Client-cluster data path. Benefit from clients spreading their I/O across the entire cluster.

  • Copy-on-write cloning. Instantly provision tens or hundreds of virtual machine images.

  • In-memory client-side caching. Enhance client I/O using a hypervisor cache.

  • Storage-side SSD caching.

  • Server-side journaling. Accelerate the write performance of data by serializing writes.

MULTI-DATACENTER SUPPORT AND DISASTER RECOVERY

  • Zone and region support. Deploy the object storage topologies of Amazon Web Services S3.

  • Global clusters. Create a global namespace for object users with read and write affinity to local clusters.

  • Disaster recovery. Enable multi-site replication for disaster recovery or archiving.

BUILT-IN ARCHIVING SOLUTION

  • Built-in archiving solution supporting disk-based archives.

CUSTOM HARDWARE SUPPORT

  • Sequestor Enterprise Storage supports servers from multiple vendors, including Cisco, HPE, Dell, and Supermicro.

SUPPORT FOR VARIOUS DISK TYPES

  • Supports SAS and NVMe SSDs, SAS (10K and 15K RPM), and NL-SAS (7.2K RPM) disks for data

  • Supports SSDs for cache

  • Supports NL-SAS disks for archive

FLEXIBLE CONFIGURATION OPTIONS

  • Flexible architecture

  • Ultra-flexible configuration options to meet the requirements of diverse environments

  • Best practices for throughput- and latency-sensitive environments

  • Support for various availability methods at every tier of data

Cluster Architecture

A Sequestor Storage cluster can have a large number of Sequestor nodes for limitless scalability, high availability and performance. Each node leverages non-proprietary hardware and intelligent Sequestor daemons that communicate with each other to:

  • Write and read data

  • Compress data

  • Ensure durability by replicating or erasure coding data

  • Monitor and report on cluster health (also called 'heartbeating')

  • Redistribute data dynamically (also called 'backfilling')

  • Ensure data integrity

  • Recover from failures

To the Sequestor client interface that reads and writes data, a Sequestor storage cluster looks like a simple pool where it stores data. However, librados and the storage cluster perform many complex operations in a manner that is completely transparent to the client interface. Sequestor clients and Sequestor OSDs both use the CRUSH (Controlled Replication Under Scalable Hashing) algorithm. The following sections provide details on how CRUSH enables Sequestor to perform these operations seamlessly.
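
Because the client-side interface is librados, the client's view really is that simple. A minimal sketch, assuming Sequestor ships a librados-compatible Python binding (the configuration path, pool, and object names are placeholders):

    import rados  # librados Python binding, assumed to ship with Sequestor

    # Placeholder configuration path and pool name for illustration.
    cluster = rados.Rados(conffile="/etc/sequestor/sequestor.conf")
    cluster.connect()

    # To the client this is just a pool: write an object, read it back.
    # CRUSH placement, replication, and recovery happen transparently.
    ioctx = cluster.open_ioctx("datapool")
    ioctx.write_full("greeting", b"hello, sequestor")
    print(ioctx.read("greeting"))  # b'hello, sequestor'

    ioctx.close()
    cluster.shutdown()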

Logical Architecture

Sequestor Enterprise Storage consists of the following logical components:

  • Pools: A Sequestor storage cluster stores data objects in logical dynamic partitions called pools. Pools can be created for particular data types, such as for block devices or object gateways, or simply to separate user groups. The pool configuration dictates the number of object replicas and the number of placement groups (PGs) in the pool. Storage pools can be either replicated or erasure coded, as appropriate for the application and cost model. Additionally, pools can “take root” at any position in the CRUSH hierarchy, allowing placement on groups of servers with differing performance characteristics, so storage can be optimized for different workloads. A sketch of pool creation follows this list.

  • Placement groups: Sequestor maps objects to placement groups (PGs). PGs are shards or fragments of a logical object pool, composed of a group of OSD daemons in a peering relationship. Placement groups provide a means of creating replication or erasure coding groups of coarser granularity than a per-object basis. A larger number of placement groups (e.g., 200 per OSD or more) leads to better balancing; a rule-of-thumb sizing calculation follows this list.

  • CRUSH ruleset: The CRUSH algorithm provides controlled, scalable, and declustered placement of replicated or erasure-coded data within Sequestor and determines how to store and retrieve data by computing data storage locations. CRUSH empowers Sequestor clients to communicate with OSDs directly, rather than through a centralized server or broker. By computing placement algorithmically, Sequestor avoids a single point of failure, a performance bottleneck, and a physical limit to scalability. A toy placement sketch follows this list.

  • Sequestor monitors: Before Sequestor clients can read or write data, they must contact a Sequestor MON to obtain the current cluster map. A Sequestor storage cluster can operate with a single monitor, but this introduces a single point of failure. For added reliability and fault tolerance, Sequestor supports an odd number of monitors in a quorum (typically three or five for small to mid-sized clusters). Consensus among the monitor instances ensures consistent knowledge about the state of the cluster; the quorum arithmetic is illustrated after this list.

  • Sequestor OSD daemons: In a Sequestor cluster, Sequestor OSD daemons store data and handle data replication, recovery, backfilling, and rebalancing. They also provide some cluster state information to Sequestor monitors by checking other Sequestor OSD daemons with a heartbeat mechanism. A Sequestor storage cluster configured to keep three replicas of every object requires a minimum of three Sequestor OSD daemons, two of which need to be operational to successfully process write requests. Sequestor OSD daemons roughly correspond to a file system on a physical hard disk drive.
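
The pool-creation sketch referenced above, again assuming a librados-compatible binding and Ceph-style monitor commands (pool names, replica counts, and PG counts are illustrative):

    import json

    import rados  # assumed librados-compatible binding

    cluster = rados.Rados(conffile="/etc/sequestor/sequestor.conf")  # placeholder path
    cluster.connect()

    def mon(cmd: dict) -> None:
        """Send a JSON monitor command and raise if it fails."""
        ret, _out, err = cluster.mon_command(json.dumps(cmd), b"")
        if ret != 0:
            raise RuntimeError(err)

    # A replicated pool for block devices: 3 copies, 128 placement groups.
    mon({"prefix": "osd pool create", "pool": "rbd-vms", "pg_num": 128})
    mon({"prefix": "osd pool set", "pool": "rbd-vms", "var": "size", "val": "3"})

    # An erasure-coded pool for colder, cheaper capacity.
    mon({"prefix": "osd pool create", "pool": "archive", "pg_num": 64,
         "pool_type": "erasure"})

    cluster.shutdown()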
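
For placement-group sizing, a common rule of thumb in CRUSH-based systems (not an official Sequestor formula) targets roughly 100 PGs per OSD, divided by the replica count and rounded up to a power of two:

    import math

    def target_pg_count(num_osds: int, replicas: int, pgs_per_osd: int = 100) -> int:
        """Rule-of-thumb PG count, rounded up to the next power of two."""
        raw = num_osds * pgs_per_osd / replicas
        return 2 ** math.ceil(math.log2(raw))

    # Example: 40 OSDs with 3-way replication -> 40*100/3 ~= 1333 -> 2048 PGs.
    print(target_pg_count(40, 3))  # 2048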
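
The key property of CRUSH is that placement is computed rather than looked up. The toy sketch below is not the CRUSH algorithm itself, just a rendezvous-hashing analogue showing how every client holding the same cluster map independently derives the same OSDs for an object, with no broker in the data path:

    import hashlib

    def place(obj, osds, replicas=3):
        """Toy stand-in for CRUSH: rank OSDs by a hash of (object, osd)
        and keep the top `replicas`. Deterministic for every client."""
        return sorted(
            osds,
            key=lambda osd: hashlib.sha256(f"{obj}:{osd}".encode()).digest(),
        )[:replicas]

    osds = [f"osd.{i}" for i in range(8)]
    print(place("vm-disk-0042", osds))  # same answer on every client
    print(place("vm-disk-0043", osds))  # different objects spread across OSDs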
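
Finally, the monitor quorum arithmetic: with 2f + 1 monitors the cluster tolerates f failures, which is why even monitor counts add cost without adding fault tolerance:

    def quorum_size(monitors: int) -> int:
        """Smallest strict majority of a monitor set."""
        return monitors // 2 + 1

    for n in (1, 3, 4, 5):
        print(f"{n} monitors -> quorum of {quorum_size(n)}, "
              f"tolerates {n - quorum_size(n)} failure(s)")
    # 3 monitors tolerate 1 failure; 4 still tolerate only 1; 5 tolerate 2.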
