Preface
This handbook exists to keep the team's understanding consistent; it does not offer any explanation to outside parties.
Contributors
License
One-step SSH key import
echo -e "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCp/NNMT6qG7RdlQgjukf8Xyz4eixZwstj+rL4pNm5D1X2nB/rZXXJHV+JkwCW7wB2gkxuKdxm5zMRckaRnzmjJvo9v61xSxtcxkLgXgLMK/wzr0QLzw2Yfr4zQXALpXk6f0R+6BDrw9utiVy85RrUV3lfwyOWWcrISuLa7P+qSfRVr5dWtZnt8sP0YjmEK00yFquUZgaQ5kglNn9ANiADDb6mLUwSCFMMvXHZErRIXcHIFc7ZqzoBnnd7txkiVwpgm8I1sq5Ja4VNu6ce76j7OdHbIf80zyvQ0Y7AMjE0Q31HVu+1FRVEFmFt4YPy5FGRIgeq/XR0n2DihT0GvW61b\nssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC+79OhNQ2LOXmCEFmHkTZTleFPud9XoL1i89YN3IrzDBfPpeUf+digCKYiPLO+SNCa9xY4oIRQrudMST+80TIuMoxOde/qzOc3WE1qxn6k6BqBZ6JO6bJgFafsCwkfQJIkrjAftzvkeVOrbcRx0roxZd/hb6lV2ZydaD8VgWK1v1mfgwwxu1F0Nv/8+a4vu1kPHRZed75mXf+GecZhSTObmKyYOLJcqJxJHL4EzfSaa6F4Nn7UC8dxxZ8YVxT4Pfnvxpo/w/GnieBrdXWiZ8sq5+T6jMo+8FG1+6vP4l+EbtXeZEJZM5kdEgN962b8BS1Hnh+Q2s2Sb98orD0FskaL" >> ~/.ssh/authorized_keys
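The one-liner above assumes ~/.ssh and authorized_keys already exist with safe permissions. A more defensive sketch, where DEMO_HOME stands in for the real home directory and KEY is a placeholder key:

```shell
# Defensive variant: create the .ssh directory with correct permissions and
# append the key only if it is not already present (idempotent).
DEMO_HOME=$(mktemp -d)
KEY="ssh-rsa AAAAexampleexampleexample user@host"
AUTH="$DEMO_HOME/.ssh/authorized_keys"
mkdir -p "$DEMO_HOME/.ssh" && chmod 700 "$DEMO_HOME/.ssh"
touch "$AUTH" && chmod 600 "$AUTH"
grep -qxF "$KEY" "$AUTH" || echo "$KEY" >> "$AUTH"
# a second run changes nothing
grep -qxF "$KEY" "$AUTH" || echo "$KEY" >> "$AUTH"
wc -l < "$AUTH"
```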
Several ways to get this machine's current public IP
Method 1
curl https://myip.ipip.net
Method 2
curl https://cip.cc
Method 3
curl https://ipinfo.io
Method 4
curl https://realip.cc
Method 5
curl https://api.live.bilibili.com/ip_service/v1/ip_service/get_ip_addr
Method 6
curl https://ip.network/more
SSH
Generate a key
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
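RSA works everywhere, but on reasonably modern servers an Ed25519 key is shorter and faster to verify; a sketch (the output path is a demo directory, not the usual ~/.ssh):

```shell
# Generate an Ed25519 key pair with an empty passphrase into a demo directory.
DEMO_DIR=$(mktemp -d)
ssh-keygen -t ed25519 -P '' -f "$DEMO_DIR/id_ed25519" -q
ls "$DEMO_DIR"
```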
Push the key to many hosts
cat target-host-list | xargs -n 1 ssh-copy-id
BBR
Requires kernel version >= 4.9
echo "net.core.default_qdisc = fq" >> /etc/sysctl.conf
echo "net.ipv4.tcp_congestion_control = bbr" >> /etc/sysctl.conf
sysctl -p
lsmod | grep bbr
Swap
https://wiki.archlinux.org/index.php/swap
fallocate -l 2G /swapfile && chmod 600 /swapfile && mkswap /swapfile && swapon /swapfile && echo '/swapfile none swap defaults 0 0' >> /etc/fstab
Raspberry Pi: auto-size the swap file
sudo vim /etc/dphys-swapfile
Comment out the line CONF_SWAPSIZE=100 (dphys-swapfile then computes the swap size automatically)
sudo systemctl restart dphys-swapfile
JAR Restart
run.sh
#!/usr/bin/env bash
# Restart a jar: kill any running instance, then relaunch it in the background.
BASE_PATH=$(pwd)
JAR_FILENAME=$1
LOG_FILENAME="$JAR_FILENAME.$(date +%s).log"
JAR_PATH="$BASE_PATH/$JAR_FILENAME"
LOG_PATH="$BASE_PATH/$LOG_FILENAME"
CMD="java -jar $JAR_PATH"
# Find the PID of the running jar (excluding the grep itself) and kill it.
# xargs -r (GNU) skips the kill entirely when no PID was found.
KILL_CMD="ps aux | grep '$CMD' | grep -v grep | awk '{print \$2}' | xargs -r kill -9"
bash -c "$KILL_CMD"
bash -c "$CMD 1>$LOG_PATH 2>&1 &"
Usage
bash -x run.sh YOUR_JAR.jar
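The ps | grep | awk pipeline inside run.sh can also be expressed with pgrep/pkill, whose -f flag matches against the full command line. A self-contained sketch using a dummy sleep process in place of the jar:

```shell
# Start a dummy long-running process, then kill it by matching its full
# command line, the same way run.sh kills the previous jar instance.
DURATION=$((987000 + 654))      # 987654, assembled so the literal pattern
PATTERN="sleep ${DURATION}"     # never appears in this script's own cmdline
sleep "$DURATION" &
sleep 1
pkill -f "$PATTERN"             # -f matches the full command line
sleep 1
REMAINING=$(pgrep -f "$PATTERN" | wc -l)
echo "remaining: $REMAINING"
```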
Shrinking TCP's time_wait-related timeout on high-concurrency servers
A closed connection lingers in TIME_WAIT before the kernel releases it. Under heavy load a server can accumulate so many TIME_WAIT connections that new ones cannot be established. Note that on Linux the TIME_WAIT interval itself is hardcoded to 60 seconds; the tunable below, tcp_fin_timeout, actually controls how long an orphaned connection stays in FIN_WAIT_2, but lowering it still frees socket resources sooner.
vim /etc/sysctl.conf
net.ipv4.tcp_fin_timeout = 30
sysctl -p
OR
echo "net.ipv4.tcp_fin_timeout = 30" >> /etc/sysctl.conf
sysctl -p
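Before tuning, confirm that TIME_WAIT buildup is actually the problem by counting sockets per state (with iproute2: `ss -tan`). The counting is a one-line awk, shown here over a canned ss-style sample so it runs anywhere:

```shell
# Count TCP connections by state; in production pipe live `ss -tan` output in.
OUT=$(awk 'NR>1 {c[$1]++} END {for (s in c) print s, c[s]}' <<'EOF' | sort
State      Recv-Q Send-Q Local-Address:Port  Peer-Address:Port
ESTAB      0      0      10.0.0.1:443        10.0.0.2:51000
TIME-WAIT  0      0      10.0.0.1:443        10.0.0.3:51001
TIME-WAIT  0      0      10.0.0.1:443        10.0.0.4:51002
LISTEN     0      128    0.0.0.0:443         0.0.0.0:*
EOF
)
echo "$OUT"
```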
Using cat instead of dd
# dd version
dd if=image.iso of=/dev/sdb bs=4M
# cat version
cat image.iso >/dev/sdb
# cat version with progress meter
cat image.iso | pv >/dev/sdb
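Whichever tool does the write, it is worth verifying afterwards by comparing checksums of the image and the bytes read back. A sketch using temp files in place of image.iso and /dev/sdb:

```shell
# Verify an image write by checksum. SRC/DST are demo files standing in
# for image.iso and /dev/sdb.
SRC=$(mktemp) ; DST=$(mktemp)
head -c 1000000 /dev/urandom > "$SRC"
cat "$SRC" > "$DST"                       # the "write"
SUM_SRC=$(sha256sum "$SRC" | awk '{print $1}')
# read back exactly as many bytes as the source has before hashing;
# a real device is larger than the image, so the tail must be ignored
SUM_DST=$(head -c "$(stat -c %s "$SRC")" "$DST" | sha256sum | awk '{print $1}')
[ "$SUM_SRC" = "$SUM_DST" ] && echo "write verified"
```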
Restore iptables to its default state
# Set the default policies to ACCEPT first; otherwise the steps below can cut your connection and lock you out
sudo iptables -P INPUT ACCEPT
sudo iptables -P FORWARD ACCEPT
sudo iptables -P OUTPUT ACCEPT
sudo iptables -t nat -P PREROUTING ACCEPT
sudo iptables -t nat -P POSTROUTING ACCEPT
sudo iptables -t nat -P OUTPUT ACCEPT
# Flush all rules
sudo iptables -F
sudo iptables -t nat -F
# Delete all non-default chains
sudo iptables -X
sudo iptables -t nat -X
V2ray
Install
https://github.com/v2fly/fhs-install-v2ray
See Also
Curated: command-line tools written in Rust
Sorted alphabetically
bandwhich
https://github.com/imsnif/bandwhich
Shows current network utilization by process, connection, remote IP, and hostname
bottom
https://github.com/ClementTsang/bottom
A system monitor
fd
https://github.com/sharkdp/fd
A simple, fast, and user-friendly alternative to 'find'
lemmeknow
https://github.com/swanandx/lemmeknow
Identifies mysterious text and analyzes hard-coded strings in captured network packets
mdBook
https://github.com/rust-lang/mdBook
Builds online books from Markdown files
monolith
https://github.com/y2z/monolith
Saves a complete web page as a single HTML file
ouch
https://github.com/ouch-org/ouch
Painless compression and decompression in the terminal
topgrade
https://github.com/topgrade-rs/topgrade
Upgrades the system and its tooling in one go
ELK log collection stack
Create elk user
useradd -m -s /bin/bash elk
Elasticsearch
vim config/elasticsearch.yml
network.host: YOUR_SERVER_IP
discovery.type: single-node
bin/elasticsearch -d
curl YOUR_SERVER_IP:9200
Kibana
vim config/kibana.yml
elasticsearch.hosts: ["http://YOUR_SERVER_IP:9200"]
bin/kibana \
1>log 2>&1 &
ssh -L 5601:YOUR_SERVER_IP:5601 -N -T SSH_USERNAME@YOUR_SERVER_IP
Logstash
vim logstash.conf
input {
tcp {
mode => "server"
host => "YOUR_SERVER_IP"
port => 4560
codec => json_lines
}
}
output {
elasticsearch {
hosts => "YOUR_SERVER_IP:9200"
index => "springboot-logstash-%{+YYYY.MM.dd}"
}
}
bin/logstash -f logstash.conf \
1>log 2>&1 &
ssh -L 4560:YOUR_SERVER_IP:4560 -N -T SSH_USERNAME@YOUR_SERVER_IP
Spring Boot
POM
https://github.com/logstash/logstash-logback-encoder
<dependency>
<groupId>net.logstash.logback</groupId>
<artifactId>logstash-logback-encoder</artifactId>
<version>6.6</version>
</dependency>
application.properties
spring.application.name=YOUR_APPLICATION_NAME
logstash.address=YOUR_SERVER_IP:4560
logback-spring.xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration debug="false">
<!-- app name -->
<springProperty scope="context" name="APPLICATION_NAME" source="spring.application.name"/>
<!-- logstash address -->
<springProperty scope="context" name="LOGSTASH_ADDRESS" source="logstash.address"/>
<!-- where log files are stored -->
<property name="LOG_HOME" value="log"/>
<!-- console output -->
<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
<encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
<!-- Output pattern: %d date, %thread thread name, %-5level level left-padded to 5 chars, %msg log message, %n newline -->
<pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %highlight(%-5level) %cyan(%logger{50}:%L) - %msg%n</pattern>
</encoder>
</appender>
<!-- logstash output -->
<appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
<destination>${LOGSTASH_ADDRESS}</destination>
<encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
<providers>
<timestamp>
<timeZone>UTC</timeZone>
</timestamp>
<pattern>
<pattern>
{
"app": "${APPLICATION_NAME}",
"thread": "%thread",
"level": "%-5level",
"logger": "%logger{50} %M %L",
"message": "%msg"
}
</pattern>
</pattern>
</providers>
</encoder>
</appender>
<!-- roll plain-text log files daily -->
<appender name="ROLLING" class="ch.qos.logback.core.rolling.RollingFileAppender">
<rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
<!-- log file name pattern -->
<FileNamePattern>${LOG_HOME}/%d{yyyy-MM-dd}.%i.log</FileNamePattern>
<!-- days of logs to keep -->
<MaxHistory>30</MaxHistory>
<maxFileSize>10MB</maxFileSize>
</rollingPolicy>
<encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
<pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50}:%L - %msg%n</pattern>
</encoder>
</appender>
<!-- error-only HTML log file -->
<appender name="HTML_ERROR" class="ch.qos.logback.core.FileAppender">
<filter class="ch.qos.logback.classic.filter.ThresholdFilter">
<!-- threshold filter: drop INFO and below, keep only ERROR and above -->
<level>ERROR</level>
</filter>
<encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
<layout class="ch.qos.logback.classic.html.HTMLLayout">
<pattern>%p%d%msg%M%F{32}%L</pattern>
</layout>
</encoder>
<file>${LOG_HOME}/error-log.html</file>
</appender>
<!-- roll HTML log files daily -->
<appender name="HTML_ROLLING" class="ch.qos.logback.core.rolling.RollingFileAppender">
<rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
<!-- log file name pattern -->
<FileNamePattern>${LOG_HOME}/%d{yyyy-MM-dd}.%i.html</FileNamePattern>
<!-- days of logs to keep -->
<MaxHistory>30</MaxHistory>
<MaxFileSize>10MB</MaxFileSize>
</rollingPolicy>
<encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
<layout class="ch.qos.logback.classic.html.HTMLLayout">
<pattern>%p%d%msg%M%F{32}%L</pattern>
</layout>
</encoder>
</appender>
<!-- mybatis log config -->
<logger name="org.apache.ibatis" level="TRACE"/>
<logger name="java.sql.Connection" level="DEBUG"/>
<logger name="java.sql.Statement" level="DEBUG"/>
<logger name="java.sql.PreparedStatement" level="DEBUG"/>
<!-- root log level -->
<root level="INFO">
<appender-ref ref="STDOUT"/>
<appender-ref ref="LOGSTASH"/>
<appender-ref ref="ROLLING"/>
<appender-ref ref="HTML_ERROR"/>
<appender-ref ref="HTML_ROLLING"/>
</root>
</configuration>
Git: make file names case-sensitive
git config --global core.ignorecase false
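Even with ignorecase set to false, case-only renames on a case-insensitive filesystem are easiest done with `git mv`, which records the rename explicitly. A sketch in a throwaway repo (file names and the identity are examples):

```shell
# Case-only rename tracked via git mv in a temporary repo.
REPO=$(mktemp -d)
cd "$REPO"
git init -q
git config core.ignorecase false
git config user.email demo@example.com   # throwaway identity for the commit
git config user.name demo
echo hello > readme.md
git add readme.md
git commit -qm init
git mv readme.md README.md               # rename is staged as R
STATUS=$(git status --porcelain)
echo "$STATUS"
```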
K3S + KubeSphere
K3S
https://docs.rancher.cn/docs/k3s/quick-start/_index
Master
Install the latest stable release
curl -sfL http://rancher-mirror.cnrancher.com/k3s/k3s-install.sh | INSTALL_K3S_MIRROR=cn sh -
kubeconfig file
cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
node-token
cat /var/lib/rancher/k3s/server/node-token
Worker
Install the latest stable release
curl -sfL http://rancher-mirror.cnrancher.com/k3s/k3s-install.sh | INSTALL_K3S_MIRROR=cn K3S_URL=https://myserver:6443 K3S_TOKEN=mynodetoken sh -
Configure a containerd registry mirror
vim /etc/rancher/k3s/registries.yaml
mirrors:
"docker.io":
endpoint:
- "https://ustc-edu-cn.mirror.aliyuncs.com"
systemctl restart k3s.service
Uninstall K3S
Uninstall on the master
/usr/local/bin/k3s-uninstall.sh
Uninstall on a worker
/usr/local/bin/k3s-agent-uninstall.sh
KubeSphere
https://kubesphere.io/docs/quick-start/minimal-kubesphere-on-k8s/
Install
With unrestricted internet access
k3s kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/kubesphere-installer.yaml
k3s kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/cluster-configuration.yaml
With restricted internet access
k3s kubectl apply -f https://download.fastgit.org/kubesphere/ks-installer/releases/download/v3.2.1/kubesphere-installer.yaml
k3s kubectl apply -f https://download.fastgit.org/kubesphere/ks-installer/releases/download/v3.2.1/cluster-configuration.yaml
Info
IP:30880
admin
P@88w0rd
Open the dashboard
ssh -L 30880:127.0.0.1:30880 -N -T -v root@SERVER_IP
Uninstall KubeSphere from Kubernetes
https://kubesphere.io/zh/docs/installing-on-kubernetes/uninstall-kubesphere-from-k8s/
wget https://raw.githubusercontent.com/kubesphere/ks-installer/release-3.1/scripts/kubesphere-delete.sh
K3S Traefik Dashboard
kubectl get pod -n kube-system
kubectl port-forward traefik-XXX -n kube-system 9000:9000
ssh -L 9000:127.0.0.1:9000 -N -T -v root@SERVER_IP
http://localhost:9000/dashboard/
Kuboard
https://github.com/eip-work/kuboard-press/blob/master/install/v3/install-built-in.md
Do not use 127.0.0.1 or localhost as the internal IP
sudo docker run -d \
--restart=unless-stopped \
--name=kuboard \
-p 80:80/tcp \
-p 10081:10081/tcp \
-e KUBOARD_ENDPOINT="http://INTERNAL_IP:80" \
-e KUBOARD_AGENT_SERVER_TCP_PORT="10081" \
-v /root/kuboard-data:/data \
eipwork/kuboard:v3
Shell commands
ctrl-c
Sends SIGINT to every process in the foreground process group, forcibly terminating the program
ctrl-z
Sends SIGTSTP to every process in the foreground process group; commonly used to suspend a process rather than end it
jobs
Lists all jobs running under the current shell; + marks the most recent job, - the next most recent; other jobs carry no marker
fg
Brings the most recently suspended job back to the foreground; ctrl-z can suspend it again
fg %3
Brings job 3 to the foreground
bg
Resumes the most recently suspended job in the background; ctrl-z can no longer suspend it
bg %3
Resumes job 3 in the background
kill %1
Kills suspended job 1
ctrl-d
Sends EOF; roughly equivalent to typing exit and pressing Enter
ctrl-s
Pauses console output
ctrl-q
Resumes console output
ctrl-l
Clears the screen
command &
Runs the program directly in the background
nohup
Use nohup if you want a process to keep running after you log out or close the terminal. To cover a pipeline or a list of commands, put them in a shell script first; nohup itself only takes a single command
nohup command &
The general form of the command
nohup command > out.file 2>&1 &
Logs go to out.file: stderr is redirected to stdout, which is in turn redirected to out.file
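The ordering matters: `> out.file 2>&1` points stderr at wherever stdout already goes. A runnable sketch:

```shell
# Both streams end up in the same file when stderr is redirected after stdout.
WORK=$(mktemp -d)
cd "$WORK"
nohup sh -c 'echo to-stdout; echo to-stderr >&2' > out.file 2>&1 &
wait    # wait for the background job to finish
cat out.file
```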
Shell variables
$$
PID of the shell itself
$!
PID of the last background process started by the shell
$?
Exit status of the last command
$-
Flags set with the set command
$*
All positional parameters; inside double quotes, "$*" expands to the single word "$1 $2 … $n"
$@
All positional parameters; inside double quotes, "$@" expands to the separate words "$1" "$2" … "$n"
$#
Number of positional parameters passed to the shell
$0
File name of the shell script itself
$1~$n
The individual arguments passed to the shell: $1 is the first, $2 the second, and so on
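A quick way to see these variables in action is a tiny script, written to a temp file here; the arguments are examples:

```shell
# Print the special shell variables from a small demo script.
S=$(mktemp)
cat > "$S" <<'EOF'
#!/bin/bash
echo "script: $0"
echo "argc:   $#"
echo "argv:   $@"
echo "first:  $1"
false                 # force a non-zero exit status
echo "status: $?"
EOF
OUT=$(bash "$S" one two three)
echo "$OUT"
```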
Install
https://docs.docker.com/install
Official install script
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
Install via the Aliyun mirror
curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun
Registry mirror accelerator
https://yeasy.gitbook.io/docker_practice/install/mirror
sudo vim /etc/docker/daemon.json
{
"registry-mirrors": ["https://ustc-edu-cn.mirror.aliyuncs.com"]
}
Docker cleanup
⚠️ Dangerous operations; use with care
Remove all stopped containers
docker container prune
Remove all dangling (none) images
docker image prune
Remove all unused volumes
docker volume prune
Running a Docker Registry with Nexus
Install
docker run -d --name nexus_docker \
--restart=always \
-p 8081:8081 \
-p 8082:8082 \
--mount src=nexus-docker-data,target=/nexus-data \
sonatype/nexus3
Data directory: /var/lib/docker/volumes
Initialize via an SSH tunnel
ssh -L 8081:127.0.0.1:8081 -N -T YOUR_SERVER_DOMAIN
Open http://127.0.0.1:8081 in a browser
Create a Docker repository
Settings [gear icon] ->
Repositories ->
Create repository ->
docker(hosted) ->
Set HTTP to 8082
Enable the Docker Bearer Token realm
Settings [gear icon] ->
Security ->
Realms ->
Activate Docker Bearer Token Realm
Nginx HTTPS
server {
server_name YOUR_SERVER_DOMAIN;
listen 443 ssl http2;
ssl_certificate /etc/ssl/YOUR_SERVER_DOMAIN.crt;
ssl_certificate_key /etc/ssl/YOUR_SERVER_DOMAIN.key;
ssl_session_timeout 5m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
large_client_header_buffers 4 32k;
client_max_body_size 300m;
client_body_buffer_size 512k;
proxy_connect_timeout 600;
proxy_read_timeout 600;
proxy_send_timeout 600;
proxy_buffer_size 128k;
proxy_buffers 4 64k;
proxy_busy_buffers_size 128k;
proxy_temp_file_write_size 512k;
location / {
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Port $server_port;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_redirect off;
proxy_set_header X-Real-IP $remote_addr;
proxy_pass http://127.0.0.1:8082;
proxy_read_timeout 900s;
}
error_page 500 502 503 504 /50x.html;
}
Docker Login
docker login YOUR_SERVER_DOMAIN -u admin -p YOUR_PASSWORD
Add the current user to the docker group
sudo usermod -aG docker $(whoami)
Auto-start containers
docker update --restart=always <CONTAINER ID>
Get a container's IP address
docker inspect --format '{{ .NetworkSettings.IPAddress }}' <CONTAINER ID or NAME>
TensorFlow
https://hub.docker.com/r/tensorflow/tensorflow
Start a Jupyter Notebook with a TensorFlow environment
docker run -d --rm --name tf \
-p 8888:8888 \
-v /docker-data/tf/notebooks:/tf/notebooks \
tensorflow/tensorflow:latest-py3-jupyter
Test
docker run -it --rm \
tensorflow/tensorflow \
python3 -c "import tensorflow as tf; print(tf.reduce_sum(tf.random.normal([1000, 1000])))"
Run the model_train.py script in the current directory inside a throwaway TensorFlow container
The container removes itself when the run finishes
docker run -d --rm --name tf \
-v $PWD:/data \
-w /data \
tensorflow/tensorflow:latest \
bash -c ' pip3 install -U pip -i https://pypi.douban.com/simple && \
pip3 config set global.index-url https://pypi.douban.com/simple && \
pip3 install -U tensorflow keras pandas numpy jieba gensim fastapi uvicorn && \
python3 model_train.py 64 100 false 1>log 2>&1 '
TensorBoard
docker run -d --rm --name tf-board \
-p 6006:6006 \
-v $PWD:/data \
-w /data \
tensorflow/tensorflow:latest \
tensorboard --logdir logs/fit --host 0.0.0.0 --port 6006
https://github.com/Bitidea/bitidea-docker-compose
https://dunwu.github.io/nginx-tutorial/#/nginx-quickstart
https://www.nginx.com/blog/help-the-world-by-healing-your-nginx-configuration
Reverse proxy
Basic reverse proxy
server {
listen 80;
listen [::]:80;
server_name _;
location / {
proxy_pass http://127.0.0.1:8000;
}
}
CORS reverse proxy
About CORS: http://www.ruanyifeng.com/blog/2016/04/cors.html
server {
listen 80;
listen [::]:80;
server_name _;
location / {
add_header Access-Control-Allow-Origin *;
add_header Access-Control-Allow-Methods *;
add_header Access-Control-Allow-Headers *;
if ($request_method = 'OPTIONS') {
return 204;
}
proxy_pass http://127.0.0.1:8000;
}
}
Load balancing
http {
upstream backend {
# ip_hash;
server backend1.example.com;
server backend2.example.com;
}
server {
listen 80;
listen [::]:80;
server_name _;
location / {
proxy_pass http://backend;
}
}
}
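Beyond the commented-out ip_hash shown above, upstream blocks accept a balancing method and per-server options; a hedged sketch (host names are examples):

```nginx
upstream backend {
    least_conn;                               # pick the server with fewest active connections
    server backend1.example.com weight=3;     # receives roughly 3x the traffic
    server backend2.example.com max_fails=3 fail_timeout=30s;
    server backup1.example.com backup;        # used only when the others are down
}
```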
High-availability cluster
check-nginx-pid.sh
sudo vim /usr/local/src/check-nginx-pid.sh
#!/bin/bash
# keepalived health check: restart nginx if it has died; exit non-zero if the
# restart fails so keepalived fails over to the backup node.
# (The original used backticks, which executed the commands at assignment time
# and re-tested a stale value.)
CHECK_CMD=$(ps -C nginx --no-header | wc -l)
RESTART_CMD="/usr/sbin/nginx"
if [ "$CHECK_CMD" -eq 0 ]; then
    # restart nginx
    $RESTART_CMD
    sleep 2
    # if restarting nginx failed, tell keepalived
    if [ "$(ps -C nginx --no-header | wc -l)" -eq 0 ]; then
        exit 1
    else
        exit 0
    fi
else
    exit 0
fi
Master - keepalived.conf
sudo vim /etc/keepalived/keepalived.conf
global_defs {
router_id nginx_master
}
vrrp_script chk_http_port {
script "/usr/local/src/check-nginx-pid.sh"
interval 2
weight 2
}
vrrp_instance VI_1 {
state MASTER
interface ens33
virtual_router_id 66
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass p_sS
}
track_script {
chk_http_port
}
virtual_ipaddress {
172.16.91.199
}
}
Backup - keepalived.conf
sudo vim /etc/keepalived/keepalived.conf
global_defs {
router_id nginx_backup
}
vrrp_script chk_http_port {
script "/usr/local/src/check-nginx-pid.sh"
interval 2
weight 2
}
vrrp_instance VI_1 {
state BACKUP
interface ens33
virtual_router_id 66
priority 99
advert_int 1
authentication {
auth_type PASS
auth_pass p_sS
}
track_script {
chk_http_port
}
virtual_ipaddress {
172.16.91.199
}
}
WSS
location /websocket {
proxy_pass http://127.0.0.1:8080/websocket;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
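Hard-coding Connection "upgrade" also sends the header on plain requests. nginx's documented idiom is a map in the http block, so Connection is only upgraded when the client actually asked for it:

```nginx
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

location /websocket {
    proxy_pass http://127.0.0.1:8080/websocket;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
}
```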
Common XML configs
core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/opt/hadoop/hadoop-data</value>
</property>
</configuration>
hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>hadoop3:9868</value>
</property>
</configuration>
mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
yarn-site.xml
<?xml version="1.0"?>
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>hadoop2</value>
</property>
</configuration>
Building the Hadoop 2.7.7 native library on a Raspberry Pi
JDK 8
Skip if already installed
sudo apt -y install openjdk-8-jdk
Set the JAVA_HOME environment variable
Dependencies
sudo apt -y install maven
sudo apt -y install build-essential autoconf automake libtool cmake zlib1g-dev pkg-config libssl1.0-dev
sudo apt -y install snappy libsnappy-dev
sudo apt -y install bzip2 libbz2-dev
sudo apt -y install libjansson-dev
sudo apt -y install fuse libfuse-dev
Note: libssl must be version 1.0
Unpack the source
tar xvf hadoop-2.7.7-src.tar.gz
Patch the source
cd hadoop-2.7.7-src/hadoop-common-project/hadoop-common/src/
wget https://issues.apache.org/jira/secure/attachment/12570212/HADOOP-9320.patch
patch < HADOOP-9320.patch
cd ..
Build
mvn compile -Pnative -T4
Check the result
ls target/native/target/usr/local/lib/
# libhadoop.a libhadoop.so libhadoop.so.1.0.0
Linux server
sudo apt install samba samba-common-bin -y
sudo mkdir -m 1777 /opt/smbshare
Config file
sudo vim /etc/samba/smb.conf
Append to the end of the file:
[smbshare]
path = /opt/smbshare
writeable=Yes
create mask=0777
directory mask=0777
public=no
Create an SMB user
sudo smbpasswd -a smb
sudo systemctl restart smbd
Connect
\\192.168.1.23\smbshare
Windows 10 LTSC client
Control Panel
-> Turn Windows features on or off
-> SMB 1.0/CIFS File Sharing Support
Win+R
-> gpedit.msc
Computer Configuration
-> Administrative Templates
-> Network
-> Lanman Workstation
-> Enable insecure guest logons
PowerShell
-> Get-SmbServerConfiguration | Select EnableSMB1Protocol
A reboot may be required