How to change the host PC's SSH port and hostname when using the KSB web toolkit


Assuming your PC environment is as follows, this guide explains how to modify the KSB web toolkit docker image accordingly.

Set the hostname in lowercase: Hadoop runs into problems when the hostname contains uppercase letters.

HOST PC hostname: csleserver
HOST PC IP: 192.168.1.102
HOST PC SSH port: 22
HOST PC user account: csle
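The lowercase requirement above can be checked with a small shell guard before you start. This is a sketch; check_hostname_case is a hypothetical helper, not part of KSB:

```shell
# Hypothetical helper: warn when a hostname contains uppercase letters,
# since Hadoop expects lowercase hostnames.
check_hostname_case() {
  lower=$(printf '%s' "$1" | tr 'A-Z' 'a-z')
  if [ "$1" = "$lower" ]; then
    echo "OK: $1"
  else
    echo "WARN: $1 contains uppercase letters; use $lower instead"
  fi
}

check_hostname_case "$(hostname)"
```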

[HOST PC] Add an entry to /etc/hosts

Edit the file (sudo vi /etc/hosts) and set it up as follows.

127.0.0.1   localhost     
#127.0.1.1  csle1      

# The following lines are desirable for IPv6 capable hosts                      
::1     ip6-localhost ip6-loopback      
fe00::0 ip6-localnet      
ff00::0 ip6-mcastprefix      
ff02::1 ip6-allnodes      
ff02::2 ip6-allrouters      

192.168.1.102 csleserver master
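A quick sanity check that the new line maps the name to the host IP. Shown here against a sample copy; on the real machine, run the same awk against /etc/hosts:

```shell
# Sample copy of the hosts file using the example values from this guide.
cat > /tmp/hosts.sample <<'EOF'
127.0.0.1 localhost
192.168.1.102 csleserver master
EOF

# Print the IP mapped to csleserver; expect 192.168.1.102.
awk '$2 == "csleserver" { print $1 }' /tmp/hosts.sample
```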

[HOST PC] Check or change the port in sshd_config

sudo vi /etc/ssh/sshd_config

Port 22
# Authentication:
LoginGraceTime 120
PermitRootLogin yes
StrictModes yes

[HOST PC] Edit /etc/ssh/ssh_config

sudo vi /etc/ssh/ssh_config

Host localhost
StrictHostKeyChecking no

Host 0.0.0.0
StrictHostKeyChecking no

Host 127.0.0.1
StrictHostKeyChecking no

Host csle*
StrictHostKeyChecking no
UserKnownHostsFile=/dev/null

Host master
StrictHostKeyChecking no
UserKnownHostsFile=/dev/null

[HOST PC] Edit startDockerCsle.sh

cd ~/ksb-csle/docker/1.0-host
vi startDockerCsle.sh
#!/bin/bash
sudo service postgresql stop

docker network rm csle_cluster csle_standalone
docker rm -f csle1

echo "csle1 slave container..."
docker run --rm -itd \
                --net=host \
                -v /home/csle/ksb-csle:/home/csle/ksb-csle \
                -v /etc/localtime:/etc/localtime:ro \
                -v /etc/timezone:/etc/timezone \
                --user=csle \
                --name=csle1 \
[modified]       --hostname=$HOSTNAME \
                ksbframework/ksb_toolbox:1.0.0 bash

docker exec -it csle1 bash

[HOST PC] Run the KSB web toolkit

./startDockerCsle.sh

[Docker container] Edit the Hadoop configuration

cd
cd hadoop/etc/hadoop

Edit core-site.xml

vi core-site.xml
:%s/csle1/csleserver/g

Edit masters

vi masters
:%s/csle1/csleserver/g

Edit slaves

vi slaves
:%s/csle1/csleserver/g

Edit yarn-site.xml

vi yarn-site.xml
:%s/csle1/csleserver/g

Add to hadoop-env.sh

vi hadoop-env.sh
export HADOOP_SSH_OPTS="-p 2243"
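The repeated vi substitutions above can also be applied non-interactively with one sed pass. A sketch, demonstrated on a throwaway copy (on the real system the files live under ~/hadoop/etc/hadoop):

```shell
# Throwaway config file containing the old hostname.
mkdir -p /tmp/hadoop-conf
printf '<value>hdfs://csle1:9000</value>\n' > /tmp/hadoop-conf/core-site.xml

# Replace csle1 with csleserver in every file of the directory in one pass.
sed -i 's/csle1/csleserver/g' /tmp/hadoop-conf/*

cat /tmp/hadoop-conf/core-site.xml
```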

[Docker container] Edit the HBase configuration

Edit hbase-site.xml

cd ~/hbase/conf
vi hbase-site.xml
:%s/csle1/csleserver/g

Edit regionservers

vi regionservers
:%s/csle1/csleserver/g

Edit hbase-env.sh

vi hbase-env.sh
export HBASE_SSH_OPTS="-p 2243"

[Docker container] Edit startService.sh

cd
vi startService.sh

#!/bin/bash
export TERM=xterm
stty rows 36 cols 150
sudo service ssh restart
sudo service postgresql restart

bash /home/csle/zookeeper-3.4.9/bin/zkServer.sh start
[modified] ssh csle@master -p 2243 "cd /home/csle/zookeeper-3.4.9/bin; ./zkServer.sh start"
[modified] ssh csle@csleserver -p 2243 "cd /home/csle/zookeeper-3.4.9/bin; ./zkServer.sh start"
start-dfs.sh
start-yarn.sh
start-hbase.sh
/home/csle/kafka/bin/kafka-server-start.sh /home/csle/kafka/config/server.properties &
/home/csle/ui_of_csle/apache-tomcat-7.0.81/bin/catalina.sh start &
/home/csle/start-mongo.sh &
sleep 5
cd /home/csle/ksb-csle/bin
/home/csle/ksb-csle/bin/startKnowledge_service.sh localhost 9876

[Docker container] Change the port in sshd_config

sudo vi /etc/ssh/sshd_config

Port 2243
# Authentication:
LoginGraceTime 120
PermitRootLogin yes
StrictModes yes
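To confirm that the Port directive matches the 2243 used in HADOOP_SSH_OPTS and HBASE_SSH_OPTS, the value can be extracted with awk. A sketch against a sample copy; on the real container, point it at /etc/ssh/sshd_config:

```shell
# Sample sshd_config with the values from this guide.
cat > /tmp/sshd_config.sample <<'EOF'
Port 2243
LoginGraceTime 120
PermitRootLogin yes
StrictModes yes
EOF

# Print the configured port; expect 2243.
awk '$1 == "Port" { print $2 }' /tmp/sshd_config.sample
```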

[Docker container] Edit the ZooKeeper configuration

vi ~/zookeeper/conf/zoo.cfg

server.1=csleserver:2888:3888

[Docker container] Edit the Kafka configuration

vi ~/kafka/config/server.properties

advertised.listeners=PLAINTEXT://csleserver:9092
zookeeper.connect=csleserver:2181

[Docker container] Edit initHdfs.sh

vi ~/ksb-csle/bin/initHdfs.sh

#!/bin/bash
hdfs dfs -mkdir -p /user/ksbuser_etri_re_kr/dataset/
hdfs dfs -mkdir -p /user/ksbuser_etri_re_kr/model
hdfs dfs -chown ksbuser_etri_re_kr:supergroup  /user/ksbuser_etri_re_kr/
hdfs dfs -chown ksbuser_etri_re_kr:supergroup  /user/ksbuser_etri_re_kr/dataset
hdfs dfs -chown ksbuser_etri_re_kr:supergroup  /user/ksbuser_etri_re_kr/model

hdfs dfs -mkdir -p /user/ksbuser_etri_re_kr/dataset/input
hdfs dfs -put /home/csle/ksb-csle/examples/input/input_kmeans.csv  /user/ksbuser_etri_re_kr/dataset/input/
hdfs dfs -put /home/csle/ksb-csle/examples/input/201509_2.csv  /user/ksbuser_etri_re_kr/dataset/input/
hdfs dfs -put /home/csle/ksb-csle/examples/input/adult.csv  /user/ksbuser_etri_re_kr/dataset/input/
hdfs dfs -put /home/csle/ksb-csle/examples/input/input.csv  /user/ksbuser_etri_re_kr/dataset/input/
hdfs dfs -put /home/csle/ksb-csle/examples/input/trainset.csv  /user/ksbuser_etri_re_kr/dataset/input/
hdfs dfs -mkdir -p /user/ksbuser_etri_re_kr/dataset/BatchAutoMLTrainInSingleEngine/hue_train_dataset
hdfs dfs -put /home/csle/ksb-csle/examples/dataset/BatchAutoMLTrainInSingleEngine/hue_train_dataset/*.*  /user/ksbuser_etri_re_kr/dataset/BatchAutoMLTrainInSingleEngine/hue_train_dataset
hdfs dfs -mkdir -p /user/ksbuser_etri_re_kr/model/autosparkml/test/0000
hdfs dfs -put /home/csle/ksb-csle/examples/autosparkml/test/automl_test/0000/*  /user/ksbuser_etri_re_kr/model/autosparkml/test/0000

hdfs dfs -mkdir -p /user/ksbuser_etri_re_kr/dataset/tensorflowTrainSource/recurrent
hdfs dfs -put /home/csle/ksb-csle/components/src/main/python/recurrent/* /user/ksbuser_etri_re_kr/dataset/tensorflowTrainSource/recurrent
hdfs dfs -chown -R ksbuser_etri_re_kr:supergroup  /user/ksbuser_etri_re_kr/dataset/tensorflowTrainSource/
hdfs dfs -chown -R ksbuser_etri_re_kr:supergroup  /user/ksbuser_etri_re_kr/model/autosparkml/

hdfs dfs -mkdir -p /user/ksbuser_etri_re_kr/dataset/pyModules/ChatbotServing
hdfs dfs -put /home/csle/ksb-csle/examples/pyModules/ChatbotServing/*  /user/ksbuser_etri_re_kr/dataset/pyModules/ChatbotServing

hdfs dfs -mkdir -p /user/ksbuser_etri_re_kr/model/kangnam
hdfs dfs -put /home/csle/ksb-csle/examples/models/kangnam/model/0001 /user/ksbuser_etri_re_kr/model/kangnam

hdfs dfs -mkdir -p /user/ksbuser_etri_re_kr/dataset/input/traffic/
hdfs dfs -put /home/csle/ksb-csle/examples/input/traffic_kangnam_cols.txt /user/ksbuser_etri_re_kr/dataset/input/traffic/
hdfs dfs -put /home/csle/ksb-csle/examples/input/traffic_kangnam_cols2.txt /user/ksbuser_etri_re_kr/dataset/input/traffic/
hdfs dfs -put /home/csle/ksb-csle/examples/input/trafficStreamingSplitSample.json /user/ksbuser_etri_re_kr/dataset/input/traffic/
hdfs dfs -put /home/csle/ksb-csle/examples/input/traffic_processing.csv /user/ksbuser_etri_re_kr/dataset/input/traffic/

hdfs dfs -mkdir -p /user/ksbuser_etri_re_kr/dataset/tensorflowTrainSource/kangnam
hdfs dfs -put /home/csle/ksb-csle/components/src/main/python/kangnam/* /user/ksbuser_etri_re_kr/dataset/tensorflowTrainSource/kangnam

hdfs dfs -mkdir -p /user/ksbuser_etri_re_kr/dataset/iris_dataset
hdfs dfs -put /home/csle/ksb-csle/examples/dataset/iris_dataset/*.*  /user/ksbuser_etri_re_kr/dataset/iris_dataset
hdfs dfs -chown -R ksbuser_etri_re_kr:supergroup  /user/ksbuser_etri_re_kr/dataset/iris_dataset

[Docker container] Initialize HDFS

Because the hostname has changed, the HDFS namenode must be reformatted.

cd
sudo rm -rf data
hdfs namenode -format
sudo service ssh start
start-dfs.sh
cd ksb-csle/bin/
./initHdfs.sh
stop-dfs.sh
sudo service ssh stop

[Docker container] Edit ksb-csle/conf/ksb.conf

vi ~/ksb-csle/conf/ksb.conf

hadoop {
    home = "/hadoop/"
[modified]   master = "csleserver"
    port = "9000"
    hdfs {
        activated = "true"
        baseDir = "/user/"
        modelPath = "/model"
        datasetPath = "/dataset"
    }
    webhdfs {
        port = "50070"
        baseDir = "/webhdfs/v1"
    }
}

[Docker container] Update the UI web toolkit settings

sudo service postgresql start
/home/csle/ui_of_csle/apache-tomcat-7.0.81/bin/catalina.sh start &

Connect to csleserver:8080 and log in. Any errors at this point can be ignored.

In Management - System Configuration, change webhdfs.host to csleserver.

/home/csle/ui_of_csle/apache-tomcat-7.0.81/bin/catalina.sh stop
sudo service postgresql stop

[HOST PC] Save the docker image

docker commit csle1 ksbframework/ksb_toolbox:1.0.1

[HOST PC] Edit startDockerCsle.sh

#!/bin/bash
sudo service postgresql stop

docker network rm csle_cluster csle_standalone
docker rm -f csle1

echo "csle1 slave container..."

docker run --rm -itd \
                --net=host \
                -v /home/csle/ksb-csle:/home/csle/ksb-csle \
                -v /etc/localtime:/etc/localtime:ro \
                -v /etc/timezone:/etc/timezone \
                --user=csle \
                --name=csle1 \
                --hostname=$HOSTNAME \
[modified]       ksbframework/ksb_toolbox:1.0.1 bash

docker exec -it csle1 bash

[HOST PC] Run the KSB web toolkit

cd ~/ksb-csle/docker/1.0-host
./startDockerCsle.sh

[Docker container] Initialize the KSB toolbox, and notes on running the examples

cd
./startService.sh
cd ~/ksb-csle/bin
./startKsbApiServing.sh

[Notes]

The bundled examples are written against the csle1 address.
In the Kafka reader/writer examples, therefore, the following parameters must be changed:

bootStrapServer address: csleserver:9092
zooKeeperConnect address: csleserver:2181

The examples below use the Kafka reader/writer, so their parameters need to be updated:

2.5.7.RealtimeIngestToPredictInSingleEngine 	
2.5.8.RealtimeIngestToServingInTwoEngines
2.5.9.RealtimeIngestToServingWithKbInTwoEngines
2.5.12.TfStreamPredictionMnist
2.6.1.TrafficPreprocessing
2.6.6.TrafficStreamingPredict
2.6.7.TrafficEndToEnd

In addition, examples that write files to hdfs://csle1:9000 must be changed to hdfs://csleserver:9000, i.e., to match your own hostname.

The parameters of the examples below need to be updated:
2.5.10.HourlyTensorflowTraining
2.5.14.TrafficPeriodicTrainAndK8sServingExample
2.6.2.TrafficTraining
2.6.4.TrafficStreamServing
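The per-example substitutions described above can be scripted with sed rather than edited by hand. A sketch on a throwaway copy; the real workflow examples live under ~/ksb-csle/examples, and the file name and keys here are invented for the demo:

```shell
# Throwaway parameter file using the old csle1 address.
cat > /tmp/example-params.txt <<'EOF'
bootStrapServer=csle1:9092
zooKeeperConnect=csle1:2181
filePath=hdfs://csle1:9000/user/dataset/input
EOF

# Rewrite every csle1 reference to csleserver in one pass.
sed -i 's/csle1/csleserver/g' /tmp/example-params.txt

cat /tmp/example-params.txt
```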

For the 2.5.12.TfStreamPredictionMnist example, modify the test scripts as follows before testing.

vi ~/ksb-csle/examples/models/mnist/client/kafka-json/consume-mnist-output.sh

#!/bin/bash

KAFKA_CONSUMER=$KSB_HOME/tools/kafka_2.11-0.10.0.1/bin/kafka-console-consumer.sh
SERVER=csleserver:9092
TOPIC=mnist_output

$KAFKA_CONSUMER --zookeeper csleserver:2181 --bootstrap-server $SERVER --topic $TOPIC

vi ~/ksb-csle/examples/models/mnist/client/kafka-json/publish-mnist-input.sh

#!/bin/bash

INPUT_FILE=mnist_input.json
KAFKA_PRODUCER=/home/csle/ksb-csle/tools/kafka_2.11-0.10.0.1/bin/kafka-console-producer.sh
[modified] BROKERS=csleserver:9092
TOPIC=mnist_input

cat $INPUT_FILE | $KAFKA_PRODUCER --broker-list $BROKERS --topic $TOPIC

Before running the 2.5.13.ConvergedServingEndToEndExample example, make the following changes and then run it.

vi ~/ksb-csle/ksblib/ksblib/dockerize/base.py
:%s/csle1/csleserver/g
:%s/2243/22/g
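The two vi substitutions can equivalently be done in one non-interactive sed call. A sketch on a throwaway stand-in file; the real file is ~/ksb-csle/ksblib/ksblib/dockerize/base.py, and the sample content below is invented for the demo:

```shell
# Throwaway stand-in for base.py containing the values to rewrite.
cat > /tmp/base.py.sample <<'EOF'
host = "csle1"
ssh_port = 2243
EOF

# Apply both substitutions in a single pass.
sed -i -e 's/csle1/csleserver/g' -e 's/2243/22/g' /tmp/base.py.sample

cat /tmp/base.py.sample
```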