Thursday, September 22, 2016

Pine64 ubuntu 16.04.1 xenial Ceph Storage Installation Hands on








Table of Contents



1.CEPH storage recommended OS

http://docs.ceph.com/docs/jewel/start/os-recommendations/
http://ceph.com/releases/v10-2-0-jewel-released/

DISTRO COMPATIBILITY

Starting with Infernalis, we have dropped support for many older distributions so that we can move to a newer compiler toolchain (e.g., C++11). Although it is still possible to build Ceph on older distributions by installing backported development tools, we are not building and publishing release packages for ceph.com.
We now build packages for the following distributions and architectures:
  • x86_64:
      - CentOS 7.x. We have dropped support for CentOS 6 (and other RHEL 6 derivatives, like Scientific Linux 6).
      - Debian Jessie 8.x. Debian Wheezy 7.x’s g++ has incomplete support for C++11 (and no systemd).
      - Ubuntu Xenial 16.04 and Trusty 14.04. Ubuntu Precise 12.04 is no longer supported.
      - Fedora 22 or later.
  • aarch64 / arm64:
      - Ubuntu Xenial 16.04.




OS check: ubuntu xenial (16.04.1 LTS)
ubuntu@admin:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.1 LTS
Release: 16.04
Codename: xenial




2.Pine64 ceph storage components

5 x pine64 for storage node
1 x Raspberry Pi 3 for admin / monitor
1 x Netgear GS108E ethernet switch
5 x 16G MicroSD for OS
5 x 32G MicroSD for Storage
5 x USB3.0 MicroSD reader
1 x PC power supply 5V 30A







3.Test environment layout


Node Name   IP Address   Functionality
ceph01      10.0.1.21    Ceph OSD node
ceph02      10.0.1.22    Ceph OSD node
ceph03      10.0.1.23    Ceph OSD node
ceph04      10.0.1.24    Ceph OSD node, monitor
ceph05      10.0.1.25    Ceph OSD node, monitor
admin       10.0.1.26    Admin




4.Change hostnames

Target nodes: ceph01 ceph02 ceph03 ceph04 ceph05 admin
ubuntu@localhost:~$ sudo hostnamectl set-hostname ceph01
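Since the same rename has to be done on all six nodes, the loop below prints the per-node command as a dry run (node names are the ones from this guide; running them over ssh as the default ubuntu user is an assumption, adjust to taste):

```shell
# Dry run: print the hostnamectl command for each node instead of executing
# it, so the list can be reviewed before running over ssh.
cmds=$(for node in ceph01 ceph02 ceph03 ceph04 ceph05 admin; do
    echo "ssh ubuntu@${node} sudo hostnamectl set-hostname ${node}"
done)
printf '%s\n' "$cmds"
```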

5.Remove unneeded packages

Target nodes: ceph01 ceph02 ceph03 ceph04 ceph05 admin
sudo apt-get remove --purge libreoffice* -y
sudo apt-get remove --purge gimp* -y
sudo apt-get remove --purge firefox* -y
sudo apt-get clean
sudo apt-get autoremove

6.Install required packages

Target nodes: ceph01 ceph02 ceph03 ceph04 ceph05 admin
~$ sudo apt-get install btrfs-tools -y

7.Switch system boot to text mode (runlevel 3)

Target nodes: ceph01 ceph02 ceph03 ceph04 ceph05
ubuntu@ceph05:~$ sudo systemctl set-default multi-user.target

8.Register /etc/hosts entries

Target nodes: ceph01 ceph02 ceph03 ceph04 ceph05 admin
ubuntu@admin:~$ cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 kevin-desktop


# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters


10.0.1.21 ceph01
10.0.1.22 ceph02
10.0.1.23 ceph03
10.0.1.24 ceph04
10.0.1.25 ceph05
10.0.1.26 admin
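The six cluster entries follow a simple pattern, so they can be generated rather than typed by hand; a sketch using the addresses and names from this guide (append the output to /etc/hosts on each node):

```shell
# Generate the /etc/hosts entries for the cluster from one node list.
entries=$(i=21
for host in ceph01 ceph02 ceph03 ceph04 ceph05 admin; do
    printf '10.0.1.%s %s\n' "$i" "$host"
    i=$(( i + 1 ))
done)
printf '%s\n' "$entries"
```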

9.Create the ceph user

Target nodes: ceph01 ceph02 ceph03 ceph04 ceph05 admin
ubuntu@admin:~$ sudo useradd -b /var/lib -m ceph -s /bin/bash
ubuntu@admin:~$ sudo passwd ceph
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Password: welcome1 is applied on all nodes.

10.Add the ceph user to sudoers and log in

Target nodes: ceph01 ceph02 ceph03 ceph04 ceph05 admin
ubuntu@admin:~$ echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
ubuntu@admin:~$ sudo chmod 0440 /etc/sudoers.d/ceph



11.Distribute the ssh key to each node for ceph user access

Target node: admin
When running ssh-keygen, do not type a passphrase; just press Enter.
ubuntu@admin:~$ su - ceph
Password:

ceph@admin:~$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/var/lib/ceph/.ssh/id_rsa):
Created directory '/var/lib/ceph/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /var/lib/ceph/.ssh/id_rsa.
Your public key has been saved in /var/lib/ceph/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:4KIDqKg9y3gBODyjl6n3JG0ed8UG7O/djmE8kGZjOVM ceph@admin
The key's randomart image is:
+---[RSA 2048]----+
| |
| . |
|o . o E |
|=+ . o o + |
|+ooo. . S # |
|+.=o . B * |
|oo+.= . . . = |
|o++* o . . o = |
|oo++o . o.o |
+----[SHA256]-----+


ceph@admin:~$ ssh-copy-id ceph@ceph01
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/var/lib/ceph/.ssh/id_rsa.pub"
The authenticity of host 'ceph01 (10.0.1.21)' can't be established.
ECDSA key fingerprint is SHA256:10+26IDg1MawZjIS6y8iDnzb/3majyh+C1mVyznGJ68.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
ceph@ceph01's password:

Now try logging into the machine, with: "ssh 'ceph@ceph01'"
and check to make sure that only the key(s) you wanted were added.

ceph@admin:~$ ssh-copy-id ceph@ceph02
...
ceph@admin:~$ ssh-copy-id ceph@ceph03
...
ceph@admin:~$ ssh-copy-id ceph@ceph04
...
ceph@admin:~$ ssh-copy-id ceph@ceph05
...

12.Register the ceph repository

Target node: admin
ceph@admin:~$ wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
OK
ceph@admin:~$ echo deb https://download.ceph.com/debian-jewel/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
deb https://download.ceph.com/debian-jewel/ xenial main

13.apt update

Target node: admin
ceph@admin:~$ sudo apt-get update -y && sudo apt-get upgrade -y

14.Install ceph-deploy

Target node: admin
ceph@admin:~$ sudo apt-get install ceph-deploy -y
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
ceph-deploy
0 upgraded, 1 newly installed, 0 to remove and 1 not upgraded.
Need to get 0 B/96.3 kB of archives.
After this operation, 617 kB of additional disk space will be used.
Selecting previously unselected package ceph-deploy.
(Reading database ... 168830 files and directories currently installed.)
Preparing to unpack .../ceph-deploy_1.5.34_all.deb ...
Unpacking ceph-deploy (1.5.34) ...
Setting up ceph-deploy (1.5.34) ...

15.Create monitor nodes

Target node: admin
ceph@admin:~$ ceph-deploy new ceph04 ceph05
[ceph_deploy.conf][DEBUG ] found configuration file at: /var/lib/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.35): /usr/bin/ceph-deploy new ceph04 ceph05
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7654f378>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] ssh_copykey : True
[ceph_deploy.cli][INFO ] mon : ['ceph04', 'ceph05']
[ceph_deploy.cli][INFO ] func : <function new at 0x76533d70>
[ceph_deploy.cli][INFO ] public_network : None
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] cluster_network : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] fsid : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[ceph04][DEBUG ] connected to host: admin
[ceph04][INFO ] Running command: ssh -CT -o BatchMode=yes ceph04
[ceph04][DEBUG ] connection detected need for sudo
[ceph04][DEBUG ] connected to host: ceph04
[ceph04][DEBUG ] detect platform information from remote host
[ceph04][DEBUG ] detect machine type
[ceph04][DEBUG ] find the location of an executable
[ceph04][INFO ] Running command: sudo /bin/ip link show
[ceph04][INFO ] Running command: sudo /bin/ip addr show
[ceph04][DEBUG ] IP addresses found: [u'10.0.1.24']
[ceph_deploy.new][DEBUG ] Resolving host ceph04
[ceph_deploy.new][DEBUG ] Monitor ceph04 at 10.0.1.24
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[ceph05][DEBUG ] connected to host: admin
[ceph05][INFO ] Running command: ssh -CT -o BatchMode=yes ceph05
[ceph05][DEBUG ] connection detected need for sudo
[ceph05][DEBUG ] connected to host: ceph05
[ceph05][DEBUG ] detect platform information from remote host
[ceph05][DEBUG ] detect machine type
[ceph05][DEBUG ] find the location of an executable
[ceph05][INFO ] Running command: sudo /bin/ip link show
[ceph05][INFO ] Running command: sudo /bin/ip addr show
[ceph05][DEBUG ] IP addresses found: [u'10.0.1.25']
[ceph_deploy.new][DEBUG ] Resolving host ceph05
[ceph_deploy.new][DEBUG ] Monitor ceph05 at 10.0.1.25
[ceph_deploy.new][DEBUG ] Monitor initial members are ['ceph04', 'ceph05']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['10.0.1.24', '10.0.1.25']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...

16.Edit ceph.conf

Target node: admin
ceph@admin:~$ vi ceph.conf
[global]
fsid = 7bf83b5c-18f3-496a-b1b3-f54316162a68
mon_initial_members = ceph04, ceph05
mon_host = 10.0.1.24,10.0.1.25
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 2
public_network = 10.0.0.0/23
ceph@admin:~$
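A quick sanity check on the edited file is to confirm that mon_initial_members and mon_host list the same number of monitors. A sketch against a copy of the [global] section above (the /tmp path is only for illustration; the real file is ./ceph.conf in the deploy directory):

```shell
# Write a copy of the [global] section, then check that the number of
# monitor names matches the number of monitor addresses.
cat > /tmp/ceph.conf.check <<'EOF'
[global]
fsid = 7bf83b5c-18f3-496a-b1b3-f54316162a68
mon_initial_members = ceph04, ceph05
mon_host = 10.0.1.24,10.0.1.25
EOF
mons=$(grep '^mon_initial_members' /tmp/ceph.conf.check | cut -d= -f2 | tr ',' '\n' | wc -l)
addrs=$(grep '^mon_host' /tmp/ceph.conf.check | cut -d= -f2 | tr ',' '\n' | wc -l)
echo "monitor names: $mons, monitor addresses: $addrs"
```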

17.ceph install

Before installing, check whether any node other than admin is logged in with the ceph account, and log out if so.
Running the install while the ceph account is logged in elsewhere can cause errors.
Target node: admin
ceph@admin:~$ ceph-deploy install ceph01 ceph02 ceph03 ceph04 ceph05 admin
[ceph_deploy.conf][DEBUG ] found configuration file at: /var/lib/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.34): /usr/bin/ceph-deploy install ceph01 ceph02 ceph03 ceph04 ceph05 admin
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] testing : None
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x765786e8>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] dev_commit : None
[ceph_deploy.cli][INFO ] install_mds : False
[ceph_deploy.cli][INFO ] stable : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] adjust_repos : True
[ceph_deploy.cli][INFO ] func : <function install at 0x765baa30>
[ceph_deploy.cli][INFO ] install_all : False
[ceph_deploy.cli][INFO ] repo : False
[ceph_deploy.cli][INFO ] host : ['ceph01', 'ceph02', 'ceph03', 'ceph04', 'ceph05', 'admin']
[ceph_deploy.cli][INFO ] install_rgw : False
[ceph_deploy.cli][INFO ] install_tests : False
[ceph_deploy.cli][INFO ] repo_url : None
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] install_osd : False
[ceph_deploy.cli][INFO ] version_kind : stable
[ceph_deploy.cli][INFO ] install_common : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] dev : master
[ceph_deploy.cli][INFO ] local_mirror : None
[ceph_deploy.cli][INFO ] release : None
[ceph_deploy.cli][INFO ] install_mon : False
[ceph_deploy.cli][INFO ] gpg_url : None




How to install/uninstall manually when ceph-deploy install fails
install:
Log in to the node as root, then install with the following commands.

1. ~$ wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -

2. ~$ echo deb https://download.ceph.com/debian-jewel/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list

3. ~$ sudo env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q --no-install-recommends install -o Dpkg::Options::=--force-confnew ceph ceph-mds radosgw

uninstall:
To force-remove on each node, run the following.
~$ sudo env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q -f --force-yes remove --purge ceph ceph-mds ceph-common ceph-fs-common radosgw

18.Initialize monitor nodes

Target node: admin
ceph@admin:~$ ceph-deploy mon create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at: /var/lib/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.35): /usr/bin/ceph-deploy mon create-initial
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create-initial
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x76452918>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mon at 0x7654c8b0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  keyrings                      : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts ceph04 ceph05
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph04 ...
[ceph04][DEBUG ] connection detected need for sudo
[ceph04][DEBUG ] connected to host: ceph04 
[ceph04][DEBUG ] detect platform information from remote host
[ceph04][DEBUG ] detect machine type
[ceph04][DEBUG ] find the location of an executable
...
[ceph05][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.ceph05.asok mon_status
[ceph05][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph05/keyring auth get-or-create client.admin osd allow * mds allow * mon allow *
[ceph05][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph05/keyring auth get-or-create client.bootstrap-mds mon allow profile bootstrap-mds
[ceph05][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph05/keyring auth get-or-create client.bootstrap-osd mon allow profile bootstrap-osd
[ceph05][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph05/keyring auth get-or-create client.bootstrap-rgw mon allow profile bootstrap-rgw
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.client.admin.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.mon.keyring' already exists
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO  ] Destroy temp directory /tmp/tmpSKaOCe


19.OSD setup

Target node: admin
ceph@admin:~$ ceph-deploy osd create ceph01:sda1 ceph02:sda1 ceph03:sda1 ceph04:sda1 ceph05:sda1 --fs-type btrfs
[ceph_deploy.conf][DEBUG ] found configuration file at: /var/lib/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.34): /usr/bin/ceph-deploy osd create ceph01:sda1 ceph02:sda1 ceph03:sda1 ceph04:sda1 --fs-type btrfs
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] disk : [('ceph01', '/dev/sda1', None), ('ceph02', '/dev/sda1', None), ('ceph03', '/dev/sda1', None), ('ceph04', '/dev/sda1', None)]
[ceph_deploy.cli][INFO ] dmcrypt : False
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] bluestore : None
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x764ccda0>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] fs_type : btrfs
[ceph_deploy.cli][INFO ] func : <function osd at 0x764b11b0>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] zap_disk : False
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph01:/dev/sda1: ceph02:/dev/sda1: ceph03:/dev/sda1: ceph04:/dev/sda1:
[ceph01][DEBUG ] connection detected need for sudo








20.OSD PREPARE

Target node: admin
ceph@admin:~$ ceph-deploy osd prepare ceph01:sda1 ceph02:sda1 ceph03:sda1 ceph04:sda1 ceph05:sda1 --fs-type btrfs
[ceph_deploy.conf][DEBUG ] found configuration file at: /var/lib/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.34): /usr/bin/ceph-deploy osd prepare ceph01:sda1 ceph02:sda1 ceph03:sda1 ceph04:sda1 --fs-type btrfs
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] disk : [('ceph01', '/dev/sda1', None), ('ceph02', '/dev/sda1', None), ('ceph03', '/dev/sda1', None), ('ceph04', '/dev/sda1', None)]
[ceph_deploy.cli][INFO ] dmcrypt : False
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] bluestore : None
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : prepare
[ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7648bda0>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] fs_type : btrfs
[ceph_deploy.cli][INFO ] func : <function osd at 0x764701b0>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] zap_disk : False
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph01:/dev/sda1: ceph02:/dev/sda1: ceph03:/dev/sda1: ceph04:/dev/sda1:
[ceph01][DEBUG ] connection detected need for sudo






21.OSD ACTIVATE

Target node: admin
ceph@admin:~$ ceph-deploy osd activate ceph01:sda1 ceph02:sda1 ceph03:sda1 ceph04:sda1 ceph05:sda1
[ceph_deploy.conf][DEBUG ] found configuration file at: /var/lib/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.34): /usr/bin/ceph-deploy osd activate ceph01:sda1 ceph02:sda1 ceph03:sda1 ceph04:sda1
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : activate
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x76519da0>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function osd at 0x764fe1b0>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] disk : [('ceph01', '/dev/sda1', None), ('ceph02', '/dev/sda1', None), ('ceph03', '/dev/sda1', None), ('ceph04', '/dev/sda1', None), ('ceph05', '/dev/sda1', None)]
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks ceph01:/dev/sda1: ceph02:/dev/sda1: ceph03:/dev/sda1: ceph04:/dev/sda1:
[ceph01][DEBUG ] connection detected need for sudo
[ceph01][DEBUG ] connected to host: ceph01




22.Check CEPH status

Target node: admin
ceph@admin:~$ ceph quorum_status --format json-pretty


{
    "election_epoch": 4,
    "quorum": [
        0,
        1
    ],
    "quorum_names": [
        "ceph04",
        "ceph05"
    ],
    "quorum_leader_name": "ceph04",
    "monmap": {
        "epoch": 1,
        "fsid": "7bf83b5c-18f3-496a-b1b3-f54316162a68",
        "modified": "2016-09-22 19:22:24.810838",
        "created": "2016-09-22 19:22:24.810838",
        "mons": [
            {
                "rank": 0,
                "name": "ceph04",
                "addr": "10.0.1.24:6789\/0"
            },
            {
                "rank": 1,
                "name": "ceph05",
                "addr": "10.0.1.25:6789\/0"
            }
        ]
    }
}



ceph@admin:~$ sudo chmod +r /etc/ceph/ceph.client.admin.keyring
ceph@admin:~$





ceph@admin:~$ ceph health
HEALTH_WARN too few PGs per OSD (25 < min 30)


ceph@admin:~$ ceph -s
cluster 7bf83b5c-18f3-496a-b1b3-f54316162a68
health HEALTH_OK
monmap e1: 2 mons at {ceph04=10.0.1.24:6789/0,ceph05=10.0.1.25:6789/0}
election epoch 4, quorum 0,1 ceph04,ceph05
osdmap e21: 4 osds: 4 up, 4 in
flags sortbitwise
pgmap v54: 64 pgs, 1 pools, 0 bytes data, 0 objects
20489 MB used, 100548 MB / 119 GB avail
64 active+clean

ceph@admin:~$ ceph osd pool set rbd pg_num 160
set pool 0 pg_num to 160


ceph@admin:~$ ceph osd pool set rbd pgp_num 160
set pool 0 pgp_num to 160


ceph@admin:~$ ceph -s
cluster 7bf83b5c-18f3-496a-b1b3-f54316162a68
health HEALTH_OK
monmap e1: 2 mons at {ceph04=10.0.1.24:6789/0,ceph05=10.0.1.25:6789/0}
election epoch 4, quorum 0,1 ceph04,ceph05
osdmap e30: 5 osds: 5 up, 5 in
flags sortbitwise
pgmap v90: 160 pgs, 1 pools, 0 bytes data, 0 objects
25617 MB used, 122 GB / 149 GB avail
160 active+clean


ceph@admin:~$ ceph mon stat
e1: 2 mons at {ceph04=10.0.1.24:6789/0,ceph05=10.0.1.25:6789/0}, election epoch 4, quorum 0,1 ceph04,ceph05
ceph@admin:~$ ceph mds stat
e1:
ceph@admin:~$ ceph osd tree
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.14549 root default
-2 0.02910 host ceph01
0 0.02910 osd.0 up 1.00000 1.00000
-3 0.02910 host ceph02
1 0.02910 osd.1 up 1.00000 1.00000
-4 0.02910 host ceph03
2 0.02910 osd.2 up 1.00000 1.00000
-5 0.02910 host ceph04
3 0.02910 osd.3 up 1.00000 1.00000
-6 0.02910 host ceph05
4 0.02910 osd.4 up 1.00000 1.00000

See the documentation on PG settings:
http://docs.ceph.com/docs/master/rados/operations/placement-groups/
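The linked page gives the usual heuristic: target total PGs ≈ (number of OSDs × 100) / replica count, rounded up to the next power of two. A sketch of that calculation for this cluster (5 OSDs, size 2); the 160 used earlier is simply a smaller round value chosen by the author:

```shell
# PG-count heuristic from the Ceph placement-group docs:
# target = (OSDs * 100) / replicas, rounded up to the next power of two.
osds=5
replicas=2                              # osd pool default size = 2 in ceph.conf
target=$(( osds * 100 / replicas ))     # raw target before rounding
pg=1
while [ "$pg" -lt "$target" ]; do
    pg=$(( pg * 2 ))
done
echo "suggested pg_num: $pg"
```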


ceph@admin:~$ ceph osd dump
epoch 30
fsid 7bf83b5c-18f3-496a-b1b3-f54316162a68
created 2016-09-22 19:22:55.416036
modified 2016-09-22 19:39:39.805911
flags sortbitwise
pool 0 'rbd' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 160 pgp_num 160 last_change 24 flags hashpspool stripe_width 0
max_osd 5
osd.0 up in weight 1 up_from 4 up_thru 29 down_at 0 last_clean_interval [0,0) 10.0.1.21:6800/10656 10.0.1.21:6801/10656 10.0.1.21:6802/10656 10.0.1.21:6803/10656 exists,up 8a03e5e1-5447-4612-a0c5-4fd7ef1d2b18
osd.1 up in weight 1 up_from 8 up_thru 29 down_at 0 last_clean_interval [0,0) 10.0.1.22:6800/8422 10.0.1.22:6801/8422 10.0.1.22:6802/8422 10.0.1.22:6803/8422 exists,up efcc116c-4ec1-4dd9-b7b3-72ef3a03a7e7
osd.2 up in weight 1 up_from 14 up_thru 29 down_at 0 last_clean_interval [0,0) 10.0.1.23:6800/8436 10.0.1.23:6801/8436 10.0.1.23:6802/8436 10.0.1.23:6803/8436 exists,up adf35a8b-2785-4ef3-a25e-1b3f21c15ea1
osd.3 up in weight 1 up_from 19 up_thru 29 down_at 0 last_clean_interval [0,0) 10.0.1.24:6800/9878 10.0.1.24:6801/9878 10.0.1.24:6802/9878 10.0.1.24:6803/9878 exists,up fac90ac0-b1b4-4653-96d6-961dedae9b0d
osd.4 up in weight 1 up_from 28 up_thru 29 down_at 0 last_clean_interval [0,0) 10.0.1.25:6800/11122 10.0.1.25:6801/11122 10.0.1.25:6802/11122 10.0.1.25:6803/11122 exists,up ca220f2d-7b54-4e37-b227-e03cad244614

23.RADOS TEST

Target node: admin or linux1


ceph@admin:~$ sudo dd if=/dev/zero of=/testfile.txt bs=1024 count=10000
10000+0 records in
10000+0 records out
10240000 bytes (10 MB, 9.8 MiB) copied, 0.21831 s, 46.9 MB/s


ceph@admin:~$ ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
119G 100546M 20492M 16.78
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
rbd 0 0 0 50272M 0



ceph@admin:~$ rados mkpool rdata
successfully created pool rdata


ceph@admin:~$ ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
119G 100546M 20492M 16.78
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
rbd 0 0 0 50272M 0
rdata 1 0 0 50272M 0


ceph@admin:~$ rados put test-object-1 /testfile.txt --pool=rdata


ceph@admin:~$ rados -p rdata ls
test-object-1


ceph@admin:~$ ceph osd map rdata test-object-1
osdmap e33 pool 'rdata' (1) object 'test-object-1' -> pg 1.74dc35e2 (1.2) -> up ([3,1], p3) acting ([3,1], p3)


ceph@admin:~$ rados rm test-object-1 --pool=rdata


ceph@admin:~$ rados -p rdata ls



24.BLOCK DEVICE TEST

Target node: admin
ceph@admin:~$ cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 kevin-desktop


# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters


10.0.1.26 admin
10.0.1.21 ceph01
10.0.1.22 ceph02
10.0.1.23 ceph03
10.0.1.24 ceph04
10.0.1.25 ceph05
10.0.0.195 kevdev
10.0.1.150 linux1 # VirtualBox node




ceph@admin:~$ ceph-deploy install root@linux1
[ceph_deploy.conf][DEBUG ] found configuration file at: /var/lib/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.34): /usr/bin/ceph-deploy install root@linux1
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] verbose : False
...
[ceph_deploy.cli][INFO ] local_mirror : None
[ceph_deploy.cli][INFO ] release : None
[ceph_deploy.cli][INFO ] install_mon : False
[ceph_deploy.cli][INFO ] gpg_url : None
[ceph_deploy.install][DEBUG ] Installing stable version jewel on cluster ceph hosts root@linux1
[ceph_deploy.install][DEBUG ] Detecting platform for host root@linux1 ...
root@linux1's password:
root@linux1's password:
[root@linux1][DEBUG ] connected to host: root@linux1

[root@linux1][DEBUG ] Package 1:ceph-10.2.2-0.el7.x86_64 already installed and latest version
[root@linux1][DEBUG ] Package 1:ceph-radosgw-10.2.2-0.el7.x86_64 already installed and latest version
[root@linux1][DEBUG ] Nothing to do
[root@linux1][INFO ] Running command: ceph --version
[root@linux1][DEBUG ] ceph version 10.2.2 (45107e21c568dd033c2f0a3107dec8f0b0e58374)
ceph@admin:~$

ceph@admin:~$ ceph-deploy admin root@linux1
[ceph_deploy.conf][DEBUG ] found configuration file at: /var/lib/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.34): /usr/bin/ceph-deploy admin root@linux1
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x764a78c8>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] client : ['root@linux1']
[ceph_deploy.cli][INFO ] func : <function admin at 0x76559070>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to root@linux1
root@linux1's password:
root@linux1's password:
[root@linux1][DEBUG ] connected to host: root@linux1
[root@linux1][DEBUG ] detect platform information from remote host
[root@linux1][DEBUG ] detect machine type
[root@linux1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
ceph@admin:~$


Target node: linux1
[ceph@linux1 ~]$ rbd create rbd_data --size 4096
2016-09-06 06:32:59.338104 7fdebb003d80 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
2016-09-06 06:32:59.338140 7fdebb003d80 -1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication
2016-09-06 06:32:59.338142 7fdebb003d80 0 librados: client.admin initialization error (2) No such file or directory
rbd: couldn't connect to the cluster!


[ceph@linux1 ~]$ sudo chmod +r /etc/ceph/ceph.client.admin.keyring


[ceph@linux1 ~]$ rbd create rbd_data --size 4096

Creating the rbd image failed with an access error because of keyring permissions; running chmod to add read permission fixed it.


[ceph@linux1 ceph]$ sudo rbd map rbd_data
rbd: sysfs write failed
RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable".
In some cases useful info is found in syslog - try "dmesg | tail" or so.
rbd: map failed: (6) No such device or address


[ceph@linux1 ceph]$ rbd feature disable rbd_data deep-flatten fast-diff object-map exclusive-lock


[ceph@linux1 ceph]$ sudo rbd map rbd_data
/dev/rbd0
A feature set mismatch occurred during rbd map; after disabling the unsupported features, map again.


[ceph@linux1 ceph]$ sudo mkfs.ext4 -m0 /dev/rbd/rbd/rbd_data
mke2fs 1.42.9 (28-Dec-2013)
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=1024 blocks, Stripe width=1024 blocks
262144 inodes, 1048576 blocks
0 blocks (0.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1073741824
32 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736


Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information:
done


[ceph@linux1 ceph]$ sudo mount /dev/rbd/rbd/rbd_data /mnt


[ceph@linux1 ceph]$ cd /mnt


[ceph@linux1 mnt]$ ls -al
total 24
drwxr-xr-x. 3 root root 4096 Sep 6 06:40 .
dr-xr-xr-x. 17 root root 4096 Sep 5 21:25 ..
drwx------. 2 root root 16384 Sep 6 06:40 lost+found


[root@linux1 mnt]# sudo dd if=/dev/zero of=testfile.txt bs=1024 count=10000
10000+0 records in
10000+0 records out
10240000 bytes (10 MB) copied, 0.0359571 s, 285 MB/s


[root@linux1 mnt]# ls -al
total 10024
drwxr-xr-x. 3 root root 4096 Sep 6 13:58 .
dr-xr-xr-x. 18 root root 4096 Sep 6 06:48 ..
drwx------. 2 root root 16384 Sep 6 13:56 lost+found
-rw-r--r--. 1 root root 10240000 Sep 6 13:58 testfile.txt
[root@linux1 mnt]#


[root@linux1 mnt]# ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
149G 122G 25900M 16.97
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
rbd 0 148M 0.19 62653M 48
[root@linux1 mnt]#


25.Check processes on each node

ceph@admin:~$ ps -ef|grep ceph | grep -v grep
root 1942 1628 0 Sep22 pts/0 00:00:00 su - ceph
ceph 1951 1942 0 Sep22 pts/0 00:00:01 -su
ceph 26485 1951 0 15:07 pts/0 00:00:00 ps -ef


ubuntu@ceph01:~$ ps -ef|grep ceph | grep -v grep
avahi 541 1 0 Sep22 ? 00:00:46 avahi-daemon: running [ceph01.local]
ceph 10656 1 0 Sep22 ? 00:05:34 /usr/bin/ceph-osd -f --cluster ceph --id 0 --setuser ceph --setgroup ceph
ubuntu@ceph01:~$


ubuntu@ceph02:~$ ps -ef|grep ceph | grep -v grep
avahi 524 1 0 Sep22 ? 00:00:47 avahi-daemon: running [ceph02.local]
ceph 8422 1 0 Sep22 ? 00:05:34 /usr/bin/ceph-osd -f --cluster ceph --id 1 --setuser ceph --setgroup ceph
ubuntu@ceph02:~$

ubuntu@ceph03:~$ ps -ef|grep ceph | grep -v grep
avahi 524 1 0 Sep22 ? 00:00:47 avahi-daemon: running [ceph03.local]
ceph 8436 1 0 Sep22 ? 00:05:40 /usr/bin/ceph-osd -f --cluster ceph --id 2 --setuser ceph --setgroup ceph
ubuntu@ceph03:~$


ubuntu@ceph04:~$ ps -ef|grep ceph | grep -v grep
ceph 7276 1 0 Sep22 ? 00:03:26 /usr/bin/ceph-mon -f --cluster ceph --id ceph04 --setuser ceph --setgroup ceph
ceph 9878 1 0 Sep22 ? 00:05:52 /usr/bin/ceph-osd -f --cluster ceph --id 3 --setuser ceph --setgroup ceph
ubuntu@ceph04:~$


ubuntu@ceph05:~$ ps -ef|grep ceph | grep -v grep
ceph 7479 1 0 Sep22 ? 00:00:54 /usr/bin/ceph-mon -f --cluster ceph --id ceph05 --setuser ceph --setgroup ceph
ceph 11122 1 0 Sep22 ? 00:05:40 /usr/bin/ceph-osd -f --cluster ceph --id 4 --setuser ceph --setgroup ceph
ubuntu@ceph05:~$


26.Check for errors

ubuntu@ceph01:~$ sudo journalctl -f


ubuntu@ceph02:~$ sudo journalctl -f


ubuntu@ceph03:~$ sudo journalctl -f


ubuntu@ceph04:~$ sudo journalctl -f


ubuntu@ceph05:~$ sudo journalctl -f


ubuntu@admin:~$ sudo journalctl -f

27.Refresh all data and configuration

Run this when wiping all configuration on each node and reinstalling from scratch.
Target node: admin


ceph@admin:~$ ceph-deploy disk zap ceph01:sda1 ceph02:sda1 ceph03:sda1 ceph04:sda1 ceph05:sda1
ceph@admin:~$ ceph-deploy forgetkeys
ceph@admin:~$ ceph-deploy purge ceph01 ceph02 ceph03 ceph04 ceph05 admin
ceph@admin:~$ ceph-deploy purgedata ceph01 ceph02 ceph03 ceph04 ceph05 admin








Target nodes: ceph01 ceph02 ceph03 ceph04 ceph05 admin
ceph@admin:~$ sudo su
# apt-get autoremove -y
# apt-get autoclean -y
# rm -fr /etc/apt/sources.list.d/ceph.list
# deluser --remove-home ceph
# rm -fr /var/lib/ceph
# rm -fr /var/local/osd*
# reboot
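The per-node cleanup steps above can be kept as one reviewable list; the sketch below only prints the steps (run them as root on each node once you are sure):

```shell
# Dry run: list the cleanup commands performed on every node during a full
# reset. Nothing here is executed.
cleanup='apt-get autoremove -y
apt-get autoclean -y
rm -fr /etc/apt/sources.list.d/ceph.list
deluser --remove-home ceph
rm -fr /var/lib/ceph
rm -fr /var/local/osd*
reboot'
printf '%s\n' "$cleanup"
```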




Tuesday, August 23, 2016

How to control FTP user access on Solaris 11.



Two ways to control proftpd user access:

1. Register users in /etc/ftpd/ftpusers
2. Register rules in /etc/proftpd.conf

#### Install the ftp service on Solaris

root@kevsol1:~# pkg install pkg://solaris/service/network/ftp
           Packages to install:   1
       Create boot environment:  No
Create backup boot environment:  No
            Services to change:   2

Planning linked: 0/3 done; 1 working: zone:zone3
Planning linked: 1/3 done; 1 working: zone:zone2
Planning linked: 2/3 done; 1 working: zone:zone1
Planning linked: 3/3 done
Download                       Packages       Files     XFER(MB)   Speed
Completed                           1/1     111/111      0.7/0.7  297k/s

Downloading linked: 0/3 done; 1 working: zone:zone3
Downloading linked: 1/3 done; 1 working: zone:zone2
Downloading linked: 2/3 done; 1 working: zone:zone1
Downloading linked: 3/3 done
Phase                                      Items
Installing new actions                   175/175
Updating package state database             Done
Updating image state                        Done
Creating fast lookup database               Done
Executing linked: 0/3 done; 1 working: zone:zone3
Executing linked: 1/3 done; 1 working: zone:zone2
Executing linked: 2/3 done; 1 working: zone:zone1
Executing linked: 3/3 done

root@kevsol1:~# svcs -a | grep ftp
disabled       11:10:11 svc:/network/ftp:default

root@kevsol1:~# svcadm enable ftp

root@kevsol1:~# svcs -a | grep ftp
online         11:33:37 svc:/network/ftp:default


#### Creating user accounts

# useradd -m testuser1
# useradd -m testuser2

#### Setting user passwords

# passwd testuser1
# passwd testuser2


testuser1 will be allowed FTP access; testuser2 will be denied.



###############################################################################
Method 1 : control via /etc/ftpd/ftpusers. No FTP service restart needed. Cannot restrict by IP.
###############################################################################

root@kevsol1:~# cat /etc/ftpd/ftpusers
#
# List of users denied access to the FTP server, see ftpusers(4).
#
root
daemon
bin
~~ snip ~~
zfssnap
ftp
testuser2   <<< users listed in this file cannot access FTP.
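Since ftpusers is just a list of names, denying a user is a one-line append. A sketch that does it idempotently (`deny_ftp_user` is a hypothetical helper; FTPUSERS is parameterized here instead of hard-coding /etc/ftpd/ftpusers so the idea can be tried on a scratch file first):

```shell
# Path of the deny list; defaults to the real Solaris location.
FTPUSERS="${FTPUSERS:-/etc/ftpd/ftpusers}"

# Hypothetical helper: add a user to the deny list only if not already there.
deny_ftp_user() {
  grep -qx "$1" "$FTPUSERS" 2>/dev/null || echo "$1" >> "$FTPUSERS"
}
```

As noted above, no service restart is needed for changes to this file to take effect.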


#### Client connection test

[kevin@kevdev ~]$ ftp 192.168.56.50
Connected to 192.168.56.50 (192.168.56.50).
220 ::ffff:192.168.56.50 FTP server ready
Name (192.168.56.50:testuser2): testuser2
331 Password required for testuser2
Password:
530 Login incorrect.
Login failed.
Remote system type is UNIX.




[kevin@kevdev ~]$ ftp 192.168.56.50
Connected to 192.168.56.50 (192.168.56.50).
220 ::ffff:192.168.56.50 FTP server ready
Name (192.168.56.50:testuser1): testuser1
331 Password required for testuser1
Password:
230 User testuser1 logged in
ftp>
ftp>




###############################################################################
Method 2 : control via /etc/proftpd.conf. FTP service restart required.
###############################################################################

Add the following to proftpd.conf.

~~ snip ~~

# Normally, we want files to be overwriteable.
AllowOverwrite on

# Bar use of SITE CHMOD by default.
<Limit SITE_CHMOD>
  DenyAll
</Limit>

<Limit LOGIN>
  AllowUser testuser1      <<<<<<<<  register allowed users here.
  DenyAll
</Limit>

# Make PAM the final authority on what gets authenticated.
AuthOrder mod_auth_pam.c* mod_auth_unix.c

~~ snip ~~


#### Restart the FTP service after the configuration change.

root@kevsol1:~# svcadm restart ftp



Group-based control is also possible, as shown below. Configure as needed.

  <Limit LOGIN>
    AllowUser testuser1
    AllowUser testuser2
    AllowGroup testgroup
    DenyAll
  </Limit>

#### IP-based access control.

  <Limit LOGIN>
    Allow from  192.168.56.1
    DenyAll
  </Limit>


#### Restart the FTP service after the configuration change.

root@kevsol1:~# svcadm restart ftp

Wednesday, March 16, 2016

RedHat kickstart dvd install

Prerequisites :
1. /root/anaconda-ks.cfg from the production server
2. ISO image of the OS to install
3. DVD writer

#1 Mount the ISO image

[root@kevdev test]# mount -t iso9660 -o loop /mydata/iso/OS/RHEL/rhel-server-6.4-x86_64-dvd.iso /media


#2 Extract the files from the ISO

[root@kevdev media]# tar cf - .|(cd /mydata/test/; tar -xf -)
[root@kevdev media]# cd /mydata/test/isolinux
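The tar pipe in step #2 is the classic idiom for copying a whole tree while preserving permissions, symlinks, and special files. As a reusable sketch (`copy_tree` is a hypothetical name; src/dst are placeholders):

```shell
# Hypothetical helper: copy an entire directory tree via a tar pipe,
# preserving permissions and special files.
copy_tree() {
  src=$1; dst=$2
  mkdir -p "$dst"
  (cd "$src" && tar cf - .) | (cd "$dst" && tar xf -)
}
```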


#3 Copy the production server's anaconda-ks.cfg to ks.cfg
[root@kevdev isolinux]# cp /mydata/temp/anaconda-ks.cfg /mydata/test/isolinux/ks.cfg

[root@kevdev isolinux]# pwd
/mydata/test/isolinux

#4 Add a label to isolinux.cfg

[root@kevdev isolinux]# vi isolinux.cfg

default vesamenu.c32
#prompt 1
timeout 600

display boot.msg

menu background splash.jpg
menu title Welcome to Red Hat Enterprise Linux 6.4!
menu color border 0 #ffffffff #00000000
menu color sel 7 #ffffffff #ff000000
menu color title 0 #ffffffff #00000000
menu color tabmsg 0 #ffffffff #00000000
menu color unsel 0 #ffffffff #00000000
menu color hotsel 0 #ff000000 #ffffffff
menu color hotkey 7 #ffffffff #ff000000
menu color scrollbar 0 #ffffffff #00000000

#### added label ####
label KickStart
  menu label ^Install from KickStart file
  menu default
  kernel vmlinuz
  append ks=cdrom:/isolinux/ks.cfg initrd=initrd.img ramdisk_size=8192
#### end of added label ####
label linux
  menu label ^Install or upgrade an existing system
  kernel vmlinuz
  append initrd=initrd.img

-- snip --
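Before building and burning a 3.5 GB ISO, it is worth confirming that the new label really points at the kickstart file. A minimal sanity-check sketch (`check_ks_label` is a hypothetical helper):

```shell
# Hypothetical helper: succeed only if the config contains the
# ks=cdrom:/isolinux/ks.cfg boot argument added above.
check_ks_label() {
  grep -q 'ks=cdrom:/isolinux/ks.cfg' "$1"
}

# e.g.  check_ks_label isolinux.cfg && echo "label ok"
```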

#5 Build the ISO file from the top-level directory.
[root@kevdev isolinux]# cd /mydata/test

[root@kevdev test]# mkisofs -o /mydata/RHEL-6.4-x86_64-sds-ks.iso -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -J -R -V "RHEL 6.4 SDS KS ISO" -input-charset utf-8 .


Using RELEA000.HTM;1 for  /RELEASE-NOTES-te-IN.html (RELEASE-NOTES-pa-IN.html)
Using RELEA001.HTM;1 for  /RELEASE-NOTES-pa-IN.html (RELEASE-NOTES-or-IN.html)
Using RELEA002.HTM;1 for  /RELEASE-NOTES-or-IN.html (RELEASE-NOTES-gu-IN.html)
Using RELEA003.HTM;1 for  /RELEASE-NOTES-gu-IN.html (RELEASE-NOTES-de-DE.html)
Using RELEA004.HTM;1 for  /RELEASE-NOTES-de-DE.html (RELEASE-NOTES-mr-IN.html)
Using RPM_G000.;1 for  /RPM-GPG-KEY-redhat-beta (RPM-GPG-KEY-redhat-release)
Using RELEA005.HTM;1 for  /RELEASE-NOTES-mr-IN.html (RELEASE-NOTES-si-LK.html)
Using RELEA006.HTM;1 for  /RELEASE-NOTES-si-LK.html (RELEASE-NOTES-it-IT.html)
Using RELEA007.HTM;1 for  /RELEASE-NOTES-it-IT.html (RELEASE-NOTES-as-IN.html)

-- snip --

 99.65% done, estimate finish Thu Mar 17 12:26:53 2016
 99.93% done, estimate finish Thu Mar 17 12:26:53 2016
Total translation table size: 2048
Total rockridge attributes bytes: 424850
Total directory bytes: 653312
Path table size(bytes): 270
Max brk space used 3dc000
1816358 extents written (3547 MB)
[root@kevdev test]#

#6 Verify the ISO file was created.
[root@kevdev test]# ls -al /mydata
total 3636876
drwx------. 24 kevin kevin       4096 Mar 17 12:26 .
dr-xr-xr-x. 19 root  root        4096 Mar  6 15:40 ..
drwxrwxr-x.  4 kevin kevin       4096 Aug 14  2013 .AndroidStudioPreview

-- snip --

-rw-r--r--.  1 root  root  3719901184 Mar 17 12:46 RHEL-6.4-x86_64-sds-ks.iso

-- snip --

#7 Test the ISO in a virtual machine, or burn it to DVD and install it on a physical server.

## If root login is not possible after installation

Boot the install DVD in rescue mode, then:

chroot /mnt/sysimage

passwd root


## END

Wednesday, December 30, 2015

How to use a USB audio dongle on the Raspberry Pi

date : 2015-12-30

RasPi Version : Jessie
pi@raspberrypi:~ $ uname -a
Linux raspberrypi 4.4.0-rc7-v7+ #831 SMP PREEMPT Mon Dec 28 19:14:55 GMT 2015 armv7l GNU/Linux

usb audio : http://www.aliexpress.com/item/External-sound-card-High-Quality-USB-2-0-Mic-Speaker-Audio-mircophone-Converter-Sound-Card-Adapter/32428906009.html?spm=2114.01020208.3.320.qqEcd6&ws_ab_test=searchweb201556_1,searchweb201644_2_79_78_77_82_80_62_81,searchweb201560_4

I had heard that USB audio gives cleaner sound... so I decided to try it.
In the end, though, it is not exactly high quality.

After plugging in the USB audio dongle for the first time, check that the device shows up as follows.

pi@raspberrypi:~ $ lsusb
Bus 001 Device 007: ID 0d8c:013c C-Media Electronics, Inc. CM108 Audio Controller
Bus 001 Device 005: ID 0bda:8176 Realtek Semiconductor Corp. RTL8188CUS 802.11n WLAN Adapter
Bus 001 Device 008: ID 413c:2106 Dell Computer Corp. Dell QuietKey Keyboard
Bus 001 Device 006: ID 13ee:0003 MosArt Optical Mouse
Bus 001 Device 004: ID 1a40:0101 Terminus Technology Inc. 4-Port HUB
Bus 001 Device 003: ID 0424:ec00 Standard Microsystems Corp. SMSC9512/9514 Fast Ethernet Adapter
Bus 001 Device 002: ID 0424:9514 Standard Microsystems Corp. 
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

pi@raspberrypi:~ $ aplay -l
**** List of PLAYBACK Hardware Devices ****
card 0: ALSA [bcm2835 ALSA], device 0: bcm2835 ALSA [bcm2835 ALSA]
  Subdevices: 8/8
  Subdevice #0: subdevice #0
  Subdevice #1: subdevice #1
  Subdevice #2: subdevice #2
  Subdevice #3: subdevice #3
  Subdevice #4: subdevice #4
  Subdevice #5: subdevice #5
  Subdevice #6: subdevice #6
  Subdevice #7: subdevice #7
card 0: ALSA [bcm2835 ALSA], device 1: bcm2835 ALSA [bcm2835 IEC958/HDMI]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 1: Device [USB PnP Sound Device], device 0: USB Audio [USB Audio]
  Subdevices: 1/1
  Subdevice #0: subdevice #0


The device appears as shown above, but there is a problem: it does not get activated.

Online guides suggest activating USB audio by editing the contents of the files below:
/usr/share/alsa/alsa.conf
/lib/modprobe.d/aliases.conf

https://www.raspberrypi.org/forums/viewtopic.php?f=28&t=124016&p=871926#p871926

Following that method does let you use USB audio, but on reboot the card parameter in .asoundrc reverts to its original value. I tried dozens of variations of other approaches; the result was always the same.
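For reference, the .asoundrc in question typically looks like the fragment below, with `card 1` selecting the USB dongle from the `aplay -l` listing (a sketch of the usual ALSA default-device setting; this is the value that kept reverting on reboot):

```
pcm.!default {
    type hw
    card 1
}
ctl.!default {
    type hw
    card 1
}
```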

The fix I finally found: a bug was preventing USB audio from being activated. Download the bug-fixed volumealsa.so from the site below, overwrite the current file, and reboot; that solves the problem.
https://github.com/RPi-Distro/repo/issues/9

Overwrite the current file with the one downloaded from that site, as follows.
Don't forget to back up the original before overwriting.

sudo cp /usr/lib/arm-linux-gnueabihf/lxpanel/plugins/volumealsa.so ~/volumealsa.so.org
sudo cp volumealsa.so /usr/lib/arm-linux-gnueabihf/lxpanel/plugins/volumealsa.so
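The two cp commands can be made a little safer by refusing to clobber an existing backup. A sketch (`replace_with_backup` is a hypothetical helper; the paths are parameterized so the idea can be tried on scratch files first):

```shell
# Hypothetical helper: back up the target once (never overwriting an
# existing backup), then replace it with the new file.
replace_with_backup() {
  target=$1; newfile=$2
  [ -e "$target.org" ] || cp "$target" "$target.org"
  cp "$newfile" "$target"
}

# e.g. (as root):
#   replace_with_backup \
#     /usr/lib/arm-linux-gnueabihf/lxpanel/plugins/volumealsa.so \
#     ./volumealsa.so
```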


Conclusion : the current version of volumealsa.so has a bug, so if you want to use USB audio, apply the patch first.

Thursday, December 24, 2015

Building a Raspberry Pi laptop with the official 7" LCD screen

Making a laptop out of a Raspberry Pi and the genuine 7" touchscreen.


This is roughly what I built. It folds open and shut, but calling it a laptop is a stretch... more like a chunky tablet.

Ah... writing a blog post really is as tedious as writing a report. If you're curious about any step, ask and I'll flesh out the details.


If you want to print this yourself -> BearMAX3D Print

Parts list :
Raspberry Pi 2 : 1ea
Memory : 32G(samsung)
Wifi : usb dongle
keyboard & mouse : wireless

Besides the basic Raspberry Pi setup, the following additional parts are needed.

1. 30cm ffc 15 pins 1.0mm pitch Flat Ribbon Flex Cable
http://www.aliexpress.com/item/Free-shipping-30cm-ffc-15-pins-1-0mm-pitch-Flat-Ribbon-Flex-Cable-15pin-20624-AWM/32345634817.html

2. laptop speaker 8R 1W 8ohm 1W 1635 16*35MM
http://www.aliexpress.com/item/Brand-new-laptop-speaker-8R-1W-8ohm-1W-1635-16-35MM/32354929856.html

3. 1pcs PAM8610 2x15W amplifier board 
http://www.aliexpress.com/item/1pcs-PAM8610-2x15W-amplifier-board-digital-two-channel-stereo-power-amplifier-board-miniature/32466368355.html

4. switch

5. 12V AC/DC adapter

6. 8.4V dc/dc step-down buck (I'll use an adjustable buck)

7. 5V dc/dc step down buck

8. 18650 lithium-ion battery x 2 (7.4V), with charge/discharge protection circuit module

9. Barrel jack

10. Raspberry pi Official 7inch touch screen.
http://kr.element14.com/raspberry-pi/raspberrypi-display/raspberry-pi-7inch-touchscreen/dp/2473872/?&&CMP=KNC-GOO-RPI-Touch-screen&mckv=s|pcrid|54926581677&gclid=CjwKEAiAhaqzBRDNltaS0pW5mWgSJADd7cYD-RPkB1ilTkfTkg78UgWage4q_3A7-IjNbxAijRcrFBoCmgjw_wcB

11. HDMI to D-SUB converter cable for dual monitor (I am not sure if it works ;)
http://www.aliexpress.com/item/1pcs-Video-Converter-HDMI-Male-to-VGA-RGB-Female-HDMI-to-VGA-Cable-1080P-for-PC/32451157972.html?spm=2114.01020208.3.11.Lvjvnb&ws_ab_test=searchweb201556_1_79_78_77_82_80_62,searchweb201644_0,searchweb201560_4


12. 1N4001 diode x 2



The photos below show how the 3D-printed parts are assembled; there is still plenty to revise, but for now I'll walk through assembling Version 1.



There are five printed parts.

1. Body: body_bottom
2. Body lid: body_lid
3. Hinge base: hinge_base
4. Hinge slides: lcd_hinge1, lcd_hinge2
5. LCD back cover: lcd_back_cover

Attach the hinge_baseV1.1 part as shown in the photo below (orientation matters), and fasten it with bolts.
There are two hinge_base parts, so assemble both the left and right sides.


Use 18650 batteries that include a protection circuit.
More than 7.4V is needed, so connect them in series.
The battery contacts are made by cutting up a tuna can with tin snips. The inside of the can is coated, so sand it off; watch your fingers.
Use the cut pieces of metal as the battery contacts, bending them into shape like this.

I printed the housing in two colors, orange and black; since I didn't take many photos while building the orange one, most of the explanation uses the black one.


Lay out the 8.4V dc/dc converter (the adjustable buck) and the batteries.
Connect the barrel jack to the dc/dc buck input. Refer to the diagram below.


This is the first schematic I've ever drawn; first time using KiCad, too.

Assemble the power switch.

Referring to the schematic above, connect the two diodes as shown below.


With the 8.4V dc/dc step-down buck assembled,

connect the 5V dc/dc step-down buck and the amplifier.
Connect the amplifier to the pads on the back of the Raspberry Pi's audio jack.


Audio wiring : yellow is left, blue is right.
It ends up laid out roughly like this.

Attach the HDMI-to-D-SUB converter for dual-monitor use.

Connect the DSI flat cable.
The gray wire is the LCD power line.

Tidy things up a bit with Kapton tape.

Main body assembly complete.


Put the lid on.


The back side...

That's it for the body. Next comes the LCD assembly, but I sent the LCD for the black unit off for repair...
Once it comes back I'll finish up and complete this post.

Here are photos of the finished orange unit.

This is how you use it.



Wireless keyboard / mouse in use.