Saturday, October 22, 2016

HBase Application


Available APIs

Java API: Full-featured, does not require a server daemon; only the client libraries/configs need to be installed

REST API: Easy to use, stateless, but slower than the Java API; requires a REST gateway server; data can be fetched directly via HTTP requests (see the sketch below)

Thrift API: Multi-language support, lightweight; requires a Thrift gateway server
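
As a small illustration of the REST API, a single cell can be fetched with a plain HTTP GET against the REST gateway. This is only a sketch under assumptions: a REST gateway running on localhost:8080, and a table "demo" with row "row1" and column "cf:q1" (all placeholder names).

    import java.io.ByteArrayOutputStream;
    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class HBaseRestGetSketch {
        public static void main(String[] args) throws Exception {
            // Path format: /<table>/<row>/<column family>:<qualifier>
            // Host, port, table, row and column below are placeholders.
            URL url = new URL("http://localhost:8080/demo/row1/cf:q1");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            // Ask the gateway for the raw cell value instead of a JSON/XML envelope
            conn.setRequestProperty("Accept", "application/octet-stream");
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            try (InputStream in = conn.getInputStream()) {
                byte[] chunk = new byte[4096];
                int n;
                while ((n = in.read(chunk)) != -1) {
                    buf.write(chunk, 0, n);
                }
            } finally {
                conn.disconnect();
            }
            // The developer must know how the value bytes were encoded
            System.out.println(new String(buf.toByteArray(), "UTF-8"));
        }
    }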






Configuration
  • Key-values that define client application properties
DDL
  • HBaseAdmin / Admin
    create/modify/delete tables
  • H*Descriptor (e.g. HTableDescriptor, HColumnDescriptor)
    describe and manipulate table metadata
DML
  • Mutations: Incr/Delete/Put/Check-and-Put/Multiput*
Query
  • Scan/Get/Filter
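
A rough sketch of how these pieces fit together in the Java API (HBase 1.x style client); the table "demo", family "cf", and qualifier "q1" are placeholders, not anything from a real cluster:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class HBaseClientSketch {
        public static void main(String[] args) throws Exception {
            // Configuration: key-values defining client properties (reads hbase-site.xml from the classpath)
            Configuration conf = HBaseConfiguration.create();
            try (Connection conn = ConnectionFactory.createConnection(conf)) {
                // DDL: create a table with one column family through Admin
                Admin admin = conn.getAdmin();
                TableName name = TableName.valueOf("demo");
                if (!admin.tableExists(name)) {
                    HTableDescriptor desc = new HTableDescriptor(name);
                    desc.addFamily(new HColumnDescriptor("cf"));
                    admin.createTable(desc);
                }
                try (Table table = conn.getTable(name)) {
                    // DML: Put one value
                    Put put = new Put(Bytes.toBytes("row1"));
                    put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q1"), Bytes.toBytes("v1"));
                    table.put(put);
                    // Query: Get a single row
                    Result r = table.get(new Get(Bytes.toBytes("row1")));
                    System.out.println(Bytes.toString(r.getValue(Bytes.toBytes("cf"), Bytes.toBytes("q1"))));
                    // Query: Scan a row-key range
                    Scan scan = new Scan(Bytes.toBytes("row0"), Bytes.toBytes("row9"));
                    try (ResultScanner rs = table.getScanner(scan)) {
                        for (Result row : rs) {
                            System.out.println(Bytes.toString(row.getRow()));
                        }
                    }
                }
            }
        }
    }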

















HBase System Architecture, Compared to Others





The Region Server that hosts .META. is registered in ZooKeeper,
so the Master (and clients) can find out where to read it by asking ZooKeeper.
Once .META. is cached, the location of any piece of data living in HBase can be found.





HBase vs. Other NoSQL

  • Great scalability to very large clusters
  • Strict consistency
  • Availability is not perfect, but can be managed

  • Integrates with Hadoop (see the MapReduce sketch after this list)
    Very efficient bulk loads and MapReduce analysis
  • Ordered range partitions (not hash)
  • Automatically shards/scales (just add more servers)
  • Sparse storage (per row)
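
A minimal sketch of the MapReduce integration mentioned above, using TableMapReduceUtil to feed an HBase table into a map-only job; the table name "demo" and the row-counting logic are illustrative assumptions only:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
    import org.apache.hadoop.hbase.mapreduce.TableMapper;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

    public class RowCountSketch {
        // The mapper receives one HBase row (Result) per call and emits its row key
        static class CountMapper extends TableMapper<Text, LongWritable> {
            @Override
            protected void map(ImmutableBytesWritable key, Result value, Context context)
                    throws IOException, InterruptedException {
                context.write(new Text(key.get()), new LongWritable(1));
            }
        }

        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            Job job = Job.getInstance(conf, "hbase-row-count-sketch");
            job.setJarByClass(RowCountSketch.class);

            Scan scan = new Scan();
            scan.setCaching(500);        // larger scan batches per RPC for MR jobs
            scan.setCacheBlocks(false);  // a full scan should not pollute the block cache

            // Wire the table ("demo" is a placeholder) as the MapReduce input
            TableMapReduceUtil.initTableMapperJob(
                    "demo", scan, CountMapper.class, Text.class, LongWritable.class, job);
            job.setNumReduceTasks(0);
            job.setOutputFormatClass(NullOutputFormat.class);
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }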

Compared to RDBMS

Enterprise Hadoop Cluster Architecture

Master roles: each is a SPOF tied to one single node

  • NameNode, JobTracker / ResourceManager
  • Hive Metastore, HiveServer2
  • Impala StateStore, Catalog Server
  • Spark Master
Node kernel environment setup
  • ulimit: use /etc/security/limits.conf to raise nofile
    By default Linux only allows 1024 open file descriptors per process.
    For a Hadoop cluster or HBase this is not enough, so it needs to be raised.
  • THP (Transparent Huge Pages), ACPI, memory overcommit issues
    THP: a newer Linux kernel feature; its page compaction does not play well with Hadoop and can cause high memory/CPU usage, so it is better to turn it off.
    ACPI: power management; it can also cause performance problems, so it is better to turn it off.
    Memory overcommit: the system stops granting memory to an application once a lot is already committed (around 50% of total), so it needs to be either configured higher or turned off.
  • Customize the configuration for each type of functional node
        High memory? How should swap be configured?
        High disk IO? Then high overcommit may not be needed.
        High CPU, high system load?
Enterprise Hadoop Cluster Data Management

HDFS config
  • HDFS Block Size: dfs.block.size
  • Replication Factor: dfs.replication, default is 3
  • Turn on dfs.permissions? fs.permissions.umask-mode
  • User permissions
  • The DataNode's dfs disk partitions should be separate from the Linux system partition; otherwise a full disk may keep the system from starting.
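
For illustration, a client can also override some of these settings per write (cluster-wide defaults still live in hdfs-site.xml); the values and path below are examples only:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsConfigSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();  // picks up core-site.xml / hdfs-site.xml
            // Example per-client overrides (illustrative values):
            conf.setLong("dfs.block.size", 256L * 1024 * 1024);  // 256 MB blocks for this client
            conf.setInt("dfs.replication", 2);                   // 2 replicas instead of the default 3
            FileSystem fs = FileSystem.get(conf);
            try (FSDataOutputStream out = fs.create(new Path("/tmp/demo.txt"))) {
                out.writeBytes("hello hdfs\n");
            }
        }
    }
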
Resource Allocation
  • CPU, Memory, Disk IO, Network IO
  • HDFS, MapReduce(Yarn), Hive jobs
  • HBase, Hive, Impala, Spark resource allocation, limitations
Enterprise Hadoop Cluster Task Scheduling

Oozie
  • dispatches HDFS and MapReduce jobs, Pig, Hive, Sqoop, Java apps, shell scripts, email
  • takes advantage of cluster resources
  • an alternative to cron jobs
    distributed job start
    starts multiple jobs in parallel
    fault tolerance, retries, and alarms within the workflow

ZooKeeper
  • Synchronizes node state
  • Handles coordination between HBase's master server and the region servers
  • Synchronizes region state for HBase's tables
    online or splitting?
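
As a small illustration, the state HBase keeps in ZooKeeper can be listed directly. The sketch assumes the default znode parent /hbase and a quorum reachable at localhost:2181 (both are assumptions; adjust to your cluster):

    import java.util.List;
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class HBaseZnodeSketch {
        public static void main(String[] args) throws Exception {
            // Connect to the ZooKeeper quorum (localhost:2181 is an assumption)
            ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, new Watcher() {
                public void process(WatchedEvent event) { /* ignore events in this sketch */ }
            });
            try {
                // Default znode parent is /hbase (zookeeper.znode.parent);
                // children typically include master, rs (region servers), meta-region-server, ...
                List<String> children = zk.getChildren("/hbase", false);
                for (String child : children) {
                    System.out.println("/hbase/" + child);
                }
            } finally {
                zk.close();
            }
        }
    }
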
Hadoop Cluster Monitors

Cloudera Manager's Monitor Tool
  • Powerful, rich set of monitoring values, monitors Impala well
  • Wastes system resources
Ganglia
  • Monitors by collecting Hadoop metrics
  • Uses its own gmond daemon to collect CPU, memory, and network IO
  • Collects JMX, IPC, RPC data
  • Weak point: no disk IO metrics by default
Graphite
  • Good third-party plugins; can push KPIs from Java applications (StatsD, Yammer Metrics)
  • Can collect server data using plugins (collectd, logster, jmxtrans)
Splunk: expensive
Nagios, Icinga


Hadoop Cluster Issue and Limitation

Issues
  • HA: too many single points of failure
  • Dangerous to upgrade
  • Wastes too much memory
Limitations
  • Missing solutions for cross-data-center deployments
  • Bottlenecks under high concurrency
  • Impala and Spark cannot handle too many queries at the same time, and there is no solution yet
  • No matter the data model or database, joins are a big issue
    Spark is still not good at this; it is more like MapReduce + Pig



































HBase Table and Storage Design

HBase is a set of tables defined by Key-Values

Definitions:

  • Row Key: each row has a unique Row Key
  • Column Family: used to group columns; defined when the table is created
  • Column Qualifier: identifies each column; can be added at run-time
    Expandable: each row may have a different number of Qualifiers
  • timestamp: a cell can hold different versions of the data, keyed by timestamp (for consistency)

A Row is referred to by a Row Key.
A Column is referred to by a Column Family and a Column Qualifier.
A Cell is referred to by a Row and a Column.
A Cell can contain multiple versions of a value, distinguished by timestamps.

The above four things combine into a Key in the following format:
    [Row Key]/[Column Family]:[Column Qualifier]/[timestamp]

With this Key, we can find the unique value we want to read.

Every value we get back is a raw byte[]; developers need to know the structure of the value themselves.

Rows are strongly consistent.
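
To make the key format and the byte[] point concrete, the sketch below reads one cell by Row Key, Column Family and Column Qualifier, asks for several timestamped versions, and decodes the raw bytes; the table and column names are placeholders:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CellLookupSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Table table = conn.getTable(TableName.valueOf("demo"))) {
                // [Row Key]/[Column Family]:[Column Qualifier]/[timestamp]
                Get get = new Get(Bytes.toBytes("row1"));                // Row Key
                get.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q1")); // Column Family : Qualifier
                get.setMaxVersions(3);                                   // read up to 3 timestamped versions
                Result result = table.get(get);
                // The value comes back as a raw byte[]; the client must know how it was encoded
                byte[] raw = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("q1"));
                if (raw != null) {
                    System.out.println("latest value: " + Bytes.toString(raw));
                }
            }
        }
    }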






How does HBase store data in the file system?

  1. Contiguous rows are grouped into the same Region
  2. A Region is divided into several units, one per Column Family
  3. Each of these units is stored as a single file in the file system
  4. The values in each file are in lexicographical order (I think ordered by Column Qualifier)
So if we open one HBase db file, it will only contain
  1. Contiguous rows
  2. Values from the same Column Family
  3. (I think) values ordered by Column Qualifier




Special Table

Table .META.
This table uses the same format as regular tables.
It can live on any node of the HBase cluster, and it stores the information for finding the target region.