"Ceph"의 두 판 사이의 차이

오픈소스 비즈니스 컨설팅
둘러보기로 가기 검색하러 가기
잔글
잔글
 
28번째 줄: 28번째 줄:
  
 
Disk -> FS (xfs, ext4, btrfs) -> OSD
 
Disk -> FS (xfs, ext4, btrfs) -> OSD
 +
  
  
 
== Tuning ==
 
== Tuning ==
 +
 +
{| border="1" cellspacing="0" cellpadding="1" style="width:100%;"
 +
|-
 +
| style="text-align: center; background-color: rgb(153, 153, 153);" | Item
 +
| style="text-align: center; background-color: rgb(153, 153, 153);" | Description
 +
|-
 +
| PG (Placement Group)
 +
| 100 PGs / OSD
 +
|}
 +
  
  
37번째 줄: 48번째 줄:
 
*[http://www.slideshare.net/ienvyou/open-stack-kilo-with-dvrcephv11 Neutron DVR and Ceph Integration, 2015.8]
 
*[http://www.slideshare.net/ienvyou/open-stack-kilo-with-dvrcephv11 Neutron DVR and Ceph Integration, 2015.8]
 
*[https://github.com/arbrandes/vault2015/blob/master/markdown/benchmark.md https://github.com/arbrandes/vault2015/blob/master/markdown/benchmark.md]
 
*[https://github.com/arbrandes/vault2015/blob/master/markdown/benchmark.md https://github.com/arbrandes/vault2015/blob/master/markdown/benchmark.md]
*[https://github.com/intel-cloud/cosbench https://github.com/intel-cloud/cosbench]<br/><br/>
+
*[https://github.com/intel-cloud/cosbench https://github.com/intel-cloud/cosbench]<br/><br/><br/>
 
[[Category:DevOps|Category:DevOps]]<br/>[[Category:OpenStack|Category:OpenStack]]<br/>[[Category:Storage|Category:Storage]]
 
[[Category:DevOps|Category:DevOps]]<br/>[[Category:OpenStack|Category:OpenStack]]<br/>[[Category:Storage|Category:Storage]]


This page summarizes Ceph.

Ceph Overview

Built on a RADOS (Reliable Autonomic Distributed Object Store) storage cluster

The CRUSH (Controlled Replication Under Scalable Hashing) algorithm determines where objects are placed within RADOS

Monitors are deployed in odd numbers to maintain quorum (at least 3); for example, a 3-monitor cluster keeps quorum as long as 2 of the 3 monitors are alive

RADOS -> LIBRADOS -> RADOSGW / RBD / CephFS

librados: Ceph's native interface, speaking the Ceph Storage Cluster protocol

  • RADOSGW, RBD, CephFS
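
RADOSGW (object), RBD (block), and CephFS (file) are all built on librados. To make that layering concrete, here is a minimal sketch using the Python rados bindings; the config path and the pool name 'data' are placeholder assumptions, not part of the original page:

  import rados

  # Connect using the standard cluster config; path is an assumed default.
  cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
  cluster.connect()
  try:
      # An I/O context is bound to a single pool ('data' is a placeholder).
      ioctx = cluster.open_ioctx('data')
      try:
          # Store and read back one object through the native RADOS interface.
          ioctx.write_full('hello-object', b'Hello, RADOS!')
          print(ioctx.read('hello-object'))
      finally:
          ioctx.close()
  finally:
      cluster.shutdown()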

RBD (RADOS Block Device)
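
In the same spirit, a hedged sketch of creating a block-device image through the Python rbd module; the pool name 'rbd', the image name, and the size are illustrative assumptions:

  import rados
  import rbd

  cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
  cluster.connect()
  try:
      ioctx = cluster.open_ioctx('rbd')  # 'rbd' is an assumed pool name
      try:
          # Create a 1 GiB image; RBD stripes it across RADOS objects.
          rbd.RBD().create(ioctx, 'disk01', 1024 ** 3)
      finally:
          ioctx.close()
  finally:
      cluster.shutdown()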

OSD (Object Storage Device)

Disk -> FS (xfs, ext4, btrfs) -> OSD


Tuning

Item                   | Description
-----------------------|----------------------------------
PG (Placement Group)   | ~100 PGs per OSD (rule of thumb)
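
As a worked example of the 100 PGs/OSD rule of thumb: the usual sizing formula is (OSD count × 100) / replica count, rounded up to the nearest power of two. A small sketch:

  # PG sizing rule of thumb: aim for ~100 PGs per OSD after replication,
  # rounding the result up to the nearest power of two.
  def pg_count(num_osds: int, replicas: int, target_per_osd: int = 100) -> int:
      raw = num_osds * target_per_osd / replicas
      power = 1
      while power < raw:
          power *= 2
      return power

  # 10 OSDs with 3-way replication: 10 * 100 / 3 = 333.3 -> 512 PGs.
  print(pg_count(10, 3))  # 512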


References

  • Neutron DVR and Ceph Integration, 2015.8: http://www.slideshare.net/ienvyou/open-stack-kilo-with-dvrcephv11
  • https://github.com/arbrandes/vault2015/blob/master/markdown/benchmark.md
  • https://github.com/intel-cloud/cosbench