---
html:
toc: true
offline: true
export_on_save:
html: true
---
# How to change the host PC's SSH port and hostname when using the KSB web toolkit
---
Assuming your PC environment is as follows, this guide explains how to modify the KSB web toolkit docker image accordingly.
Set the hostname in lowercase: uppercase letters cause problems when running Hadoop.
```
HOST PC hostname : csleserver
HOST PC IP : 192.168.1.102
HOST PC SSH PORT : 22
HOST PC 사용자 계정 : csle
```
# [HOST PC] Add entries to /etc/hosts
Configure it as follows:
```
127.0.0.1 localhost
#127.0.1.1 csle1
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
192.168.1.102 csleserver master
```
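Once the file is saved, the new names should resolve locally. A quick check, assuming `getent` is available (it is on most Linux systems):

```shell
# Both aliases should resolve to 192.168.1.102 via /etc/hosts
getent hosts csleserver master
# localhost should still resolve as before
getent hosts localhost
```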
# [HOST PC] Change or verify the sshd_config port
```
sudo vi /etc/ssh/sshd_config
```
```
Port 22
# Authentication:
LoginGraceTime 120
PermitRootLogin yes
StrictModes yes
```
# [HOST PC] sudo vi /etc/ssh/ssh_config
```
Host localhost
    StrictHostKeyChecking no
Host 0.0.0.0
    StrictHostKeyChecking no
Host 127.0.0.1
    StrictHostKeyChecking no
Host csle*
    StrictHostKeyChecking no
    UserKnownHostsFile=/dev/null
Host master
    StrictHostKeyChecking no
    UserKnownHostsFile=/dev/null
```
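You can confirm the per-host overrides are picked up without actually connecting: `ssh -G` (OpenSSH 6.8+) prints the effective client options for a given host.

```shell
# Print the options ssh would actually use for "master";
# after the edit above, stricthostkeychecking should read "no"
ssh -G master | grep -i stricthostkeychecking
```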
# [HOST PC] Modify startDockerCsle.sh
```
cd ~/ksb-csle/docker/1.0-host
vi startDockerCsle.sh
```
```
#!/bin/bash
sudo service postgresql stop
docker network rm csle_cluster csle_standalone
docker rm -f csle1
echo "start csle1 slave container..."
docker run --rm -itd \
--net=host \
-v /home/csle/ksb-csle:/home/csle/ksb-csle \
-v /etc/localtime:/etc/localtime:ro \
-v /etc/timezone:/etc/timezone \
--user=csle \
--name=csle1 \
[modified] --hostname=$HOSTNAME \
ksbframework/ksb_toolbox:1.0.0 bash
docker exec -it csle1 bash
```
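After the script drops you into the container, you can verify that `--hostname=$HOSTNAME` propagated (run from another host shell; assumes the csle1 container is up):

```shell
# Should print the host's hostname (csleserver), not the old csle1
docker exec csle1 hostname
```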
# [HOST PC] Run the KSB web toolkit
```
./startDockerCsle.sh
```
# [docker container] Modify the Hadoop settings
```
cd
cd hadoop/etc/hadoop
```
## Modify core-site.xml
```
vi core-site.xml
:%s/csle1/csleserver/g
```
## Modify masters
```
vi masters
:%s/csle1/csleserver/g
```
## Modify slaves
```
vi slaves
:%s/csle1/csleserver/g
```
## Modify yarn-site.xml
```
vi yarn-site.xml
:%s/csle1/csleserver/g
```
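The four per-file edits above can also be done in one `sed` pass. A minimal sketch, demonstrated on a scratch copy so nothing real is touched; against the real files you would run the same `sed` line inside ~/hadoop/etc/hadoop:

```shell
# Scratch copies standing in for core-site.xml, masters, slaves, yarn-site.xml
confdir=$(mktemp -d)
printf 'hdfs://csle1:9000\n' > "$confdir/core-site.xml"
printf 'csle1\n'             > "$confdir/masters"
printf 'csle1\n'             > "$confdir/slaves"
printf 'csle1\n'             > "$confdir/yarn-site.xml"

# The actual substitution: every csle1 becomes csleserver, edited in place
sed -i 's/csle1/csleserver/g' "$confdir"/core-site.xml "$confdir"/masters \
                              "$confdir"/slaves "$confdir"/yarn-site.xml

grep -h csleserver "$confdir/core-site.xml"   # hdfs://csleserver:9000
rm -r "$confdir"
```

The same one-liner works for the HBase configuration files (hbase-site.xml, regionservers) as well.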
## Add to hadoop-env.sh
```
vi hadoop-env.sh
export HADOOP_SSH_OPTS="-p 2243"
```
# [docker container] Modify the HBase settings
## Modify hbase-site.xml
```
cd ~/hbase/conf
vi hbase-site.xml
:%s/csle1/csleserver/g
```
## Modify regionservers
```
vi regionservers
:%s/csle1/csleserver/g
```
## Modify hbase-env.sh
```
vi hbase-env.sh
export HBASE_SSH_OPTS="-p 2243"
```
# [docker container] Modify startService.sh
```
cd
vi startService.sh
```
```
#!/bin/bash
export TERM=xterm
stty rows 36 cols 150
sudo service ssh restart
sudo service postgresql restart
bash /home/csle/zookeeper-3.4.9/bin/zkServer.sh start
[modified] ssh csle@master -p 2243 "cd /home/csle/zookeeper-3.4.9/bin; ./zkServer.sh start"
[modified] ssh csle@csleserver -p 2243 "cd /home/csle/zookeeper-3.4.9/bin; ./zkServer.sh start"
start-dfs.sh
start-yarn.sh
start-hbase.sh
/home/csle/kafka/bin/kafka-server-start.sh /home/csle/kafka/config/server.properties &
/home/csle/ui_of_csle/apache-tomcat-7.0.81/bin/catalina.sh start &
/home/csle/start-mongo.sh &
sleep 5
cd /home/csle/ksb-csle/bin
/home/csle/ksb-csle/bin/startKnowledge_service.sh localhost 9876
```
# [docker container] Change the sshd_config port
```
sudo vi /etc/ssh/sshd_config
```
```
Port 2243
# Authentication:
LoginGraceTime 120
PermitRootLogin yes
StrictModes yes
```
# [docker container] Modify zookeeper
```
vi ~/zookeeper/conf/zoo.cfg
```
```
server.1=csleserver:2888:3888
```
# [docker container] Modify kafka
```
vi ~/kafka/config/server.properties
```
```
advertised.listeners=PLAINTEXT://csleserver:9092
zookeeper.connect=csleserver:2181
```
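After editing, confirm both settings point at the new hostname. A sketch on a scratch copy; against the real file you would grep ~/kafka/config/server.properties:

```shell
# Scratch stand-in for ~/kafka/config/server.properties
props=$(mktemp)
cat > "$props" <<'EOF'
advertised.listeners=PLAINTEXT://csleserver:9092
zookeeper.connect=csleserver:2181
EOF

# Both lines should reference csleserver, and none should still say csle1
grep -c 'csleserver' "$props"   # 2
rm "$props"
```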
# [docker container] Modify initHdfs.sh
```
vi ~/ksb-csle/bin/initHdfs.sh
```
```
#!/bin/bash
hdfs dfs -mkdir -p /user/ksbuser_etri_re_kr/dataset/
hdfs dfs -mkdir -p /user/ksbuser_etri_re_kr/model
hdfs dfs -chown ksbuser_etri_re_kr:supergroup /user/ksbuser_etri_re_kr/
hdfs dfs -chown ksbuser_etri_re_kr:supergroup /user/ksbuser_etri_re_kr/dataset
hdfs dfs -chown ksbuser_etri_re_kr:supergroup /user/ksbuser_etri_re_kr/model
hdfs dfs -mkdir -p /user/ksbuser_etri_re_kr/dataset/input
hdfs dfs -put /home/csle/ksb-csle/examples/input/input_kmeans.csv /user/ksbuser_etri_re_kr/dataset/input/
hdfs dfs -put /home/csle/ksb-csle/examples/input/201509_2.csv /user/ksbuser_etri_re_kr/dataset/input/
hdfs dfs -put /home/csle/ksb-csle/examples/input/adult.csv /user/ksbuser_etri_re_kr/dataset/input/
hdfs dfs -put /home/csle/ksb-csle/examples/input/input.csv /user/ksbuser_etri_re_kr/dataset/input/
hdfs dfs -put /home/csle/ksb-csle/examples/input/trainset.csv /user/ksbuser_etri_re_kr/dataset/input/
hdfs dfs -mkdir -p /user/ksbuser_etri_re_kr/dataset/BatchAutoMLTrainInSingleEngine/hue_train_dataset
hdfs dfs -put /home/csle/ksb-csle/examples/dataset/BatchAutoMLTrainInSingleEngine/hue_train_dataset/*.* /user/ksbuser_etri_re_kr/dataset/BatchAutoMLTrainInSingleEngine/hue_train_dataset
hdfs dfs -mkdir -p /user/ksbuser_etri_re_kr/model/autosparkml/test/0000
hdfs dfs -put /home/csle/ksb-csle/examples/autosparkml/test/automl_test/0000/* /user/ksbuser_etri_re_kr/model/autosparkml/test/0000
hdfs dfs -mkdir -p /user/ksbuser_etri_re_kr/dataset/tensorflowTrainSource/recurrent
hdfs dfs -put /home/csle/ksb-csle/components/src/main/python/recurrent/* /user/ksbuser_etri_re_kr/dataset/tensorflowTrainSource/recurrent
hdfs dfs -chown -R ksbuser_etri_re_kr:supergroup /user/ksbuser_etri_re_kr/dataset/tensorflowTrainSource/
hdfs dfs -chown -R ksbuser_etri_re_kr:supergroup /user/ksbuser_etri_re_kr/model/autosparkml/
hdfs dfs -mkdir -p /user/ksbuser_etri_re_kr/dataset/pyModules/ChatbotServing
hdfs dfs -put /home/csle/ksb-csle/examples/pyModules/ChatbotServing/* /user/ksbuser_etri_re_kr/dataset/pyModules/ChatbotServing
hdfs dfs -mkdir -p /user/ksbuser_etri_re_kr/model/kangnam
hdfs dfs -put /home/csle/ksb-csle/examples/models/kangnam/model/0001 /user/ksbuser_etri_re_kr/model/kangnam
hdfs dfs -mkdir -p /user/ksbuser_etri_re_kr/dataset/input/traffic/
hdfs dfs -put /home/csle/ksb-csle/examples/input/traffic_kangnam_cols.txt /user/ksbuser_etri_re_kr/dataset/input/traffic/
hdfs dfs -put /home/csle/ksb-csle/examples/input/traffic_kangnam_cols2.txt /user/ksbuser_etri_re_kr/dataset/input/traffic/
hdfs dfs -put /home/csle/ksb-csle/examples/input/trafficStreamingSplitSample.json /user/ksbuser_etri_re_kr/dataset/input/traffic/
hdfs dfs -put /home/csle/ksb-csle/examples/input/traffic_processing.csv /user/ksbuser_etri_re_kr/dataset/input/traffic/
hdfs dfs -mkdir -p /user/ksbuser_etri_re_kr/dataset/tensorflowTrainSource/kangnam
hdfs dfs -put /home/csle/ksb-csle/components/src/main/python/kangnam/* /user/ksbuser_etri_re_kr/dataset/tensorflowTrainSource/kangnam
hdfs dfs -mkdir -p /user/ksbuser_etri_re_kr/dataset/iris_dataset
hdfs dfs -put /home/csle/ksb-csle/examples/dataset/iris_dataset/*.* /user/ksbuser_etri_re_kr/dataset/iris_dataset
hdfs dfs -chown -R ksbuser_etri_re_kr:supergroup /user/ksbuser_etri_re_kr/dataset/iris_dataset
```
# [docker container] Initialize HDFS
Because the hostname has changed, the HDFS namenode must be re-formatted.
```
cd
sudo rm -rf data
hdfs namenode -format
sudo service ssh start
start-dfs.sh
cd ksb-csle/bin/
./initHdfs.sh
stop-dfs.sh
sudo service ssh stop
```
# [docker container] Modify ksb-csle/conf/ksb.conf
```
vi ~/ksb-csle/conf/ksb.conf
```
```
hadoop {
  home = "/hadoop/"
  [modified] master = "csleserver"
  port = "9000"
  hdfs {
    activated = "true"
    baseDir = "/user/"
    modelPath = "/model"
    datasetPath = "/dataset"
  }
  webhdfs {
    port = "50070"
    baseDir = "/webhdfs/v1"
  }
}
```
# [docker container] Modify the UI web toolkit settings
```
sudo service postgresql start
/home/csle/ui_of_csle/apache-tomcat-7.0.81/bin/catalina.sh start &
Access csleserver:8080 and log in (errors at this stage can be ignored).
In Management - system configuration, change webhdfs.host to csleserver.
/home/csle/ui_of_csle/apache-tomcat-7.0.81/bin/catalina.sh stop
sudo service postgresql stop
```
# [HOST PC] Save the docker image
```
docker commit csle1 ksbframework/ksb_toolbox:1.0.1
```
# [HOST PC] Modify startDockerCsle.sh to use the new image
```
#!/bin/bash
sudo service postgresql stop
docker network rm csle_cluster csle_standalone
docker rm -f csle1
echo "csle1 slave container..."
docker run --rm -itd \
--net=host \
-v /home/csle/ksb-csle:/home/csle/ksb-csle \
-v /etc/localtime:/etc/localtime:ro \
-v /etc/timezone:/etc/timezone \
--user=csle \
--name=csle1 \
--hostname=$HOSTNAME \
[modified] ksbframework/ksb_toolbox:1.0.1 bash
docker exec -it csle1 bash
```
# [HOST PC] Run the KSB web toolkit
```
cd ~/ksb-csle/docker/1.0-host
./startDockerCsle.sh
```
# [docker container] Initialize the KSB toolbox, and precautions for running the examples
```
cd
./startService.sh
cd ~/ksb-csle/bin
./startKsbApiServing.sh
```
### [Precautions]
The bundled examples are written against the csle1 address.
Therefore, in the kafka reader / writer examples the following parameters need to be changed:
```
bootStrapServer address: csleserver:9092
zooKeeperConnect address: csleserver:2181
```
The examples below use the kafka reader/writer, so their parameters must be modified:
```
2.5.7.RealtimeIngestToPredictInSingleEngine
2.5.8.RealtimeIngestToServingInTwoEngines
2.5.9.RealtimeIngestToServingWithKbInTwoEngines
2.5.12.TfStreamPredictionMnist
2.6.1.TrafficPreprocessing
2.6.6.TrafficStreamingPredict
2.6.7.TrafficEndToEnd
```
Also, the examples that write files to the hdfs://csle1:9000 address must likewise be changed to hdfs://csleserver:9000, to match your hostname.
The parameters of the following examples need to be modified:
```
2.5.10.HourlyTensorflowTraining
2.5.14.TrafficPeriodicTrainAndK8sServingExample
2.6.2.TrafficTraining
2.6.4.TrafficStreamServing
```
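A quick way to find every example file that still references the old hostname is a recursive grep. Sketched here on a scratch tree; against the real toolbox you would run `grep -rl 'csle1' ~/ksb-csle/examples`:

```shell
# Scratch tree standing in for ~/ksb-csle/examples
tree=$(mktemp -d)
echo 'bootStrapServer: csle1:9092'    > "$tree/workflow_a.json"
echo 'output: hdfs://csle1:9000/out'  > "$tree/workflow_b.json"
echo 'output: hdfs://csleserver:9000' > "$tree/workflow_c.json"

# Lists only the files that still need editing (a and b, not c)
grep -rl 'csle1' "$tree"
rm -r "$tree"
```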
For the 2.5.12.TfStreamPredictionMnist example, modify the test scripts as follows before testing.
```
vi ~/ksb-csle/examples/models/mnist/client/kafka-json/consume-mnist-output.sh
```
```
#!/bin/bash
KAFKA_CONSUMER=$KSB_HOME/tools/kafka_2.11-0.10.0.1/bin/kafka-console-consumer.sh
SERVER=csleserver:9092
TOPIC=mnist_output
$KAFKA_CONSUMER --zookeeper csleserver:2181 --bootstrap-server $SERVER --topic $TOPIC
```
```
vi ~/ksb-csle/examples/models/mnist/client/kafka-json/publish-mnist-input.sh
```
```
#!/bin/bash
INPUT_FILE=mnist_input.json
KAFKA_PRODUCER=/home/csle/ksb-csle/tools/kafka_2.11-0.10.0.1/bin/kafka-console-producer.sh
[modified] BROKERS=csleserver:9092
TOPIC=mnist_input
cat $INPUT_FILE | $KAFKA_PRODUCER --broker-list $BROKERS --topic $TOPIC
```
Before running the 2.5.13.ConvergedServingEndToEndExample example, make the following changes:
```
vi ~/ksb-csle/ksblib/ksblib/dockerize/base.py
:%s/csle1/csleserver/g
:%s/2243/22/g
```