1. Ceph
Based on CentOS Stream 9
1.1. Building a Ceph Cluster
The Ceph cluster consists of three virtual machines: ceph1, ceph2, and ceph3.
Run the following on ceph1, ceph2, and ceph3:
dnf install -y epel-release
dnf install -y systemd-timesyncd
systemctl enable systemd-timesyncd --now
rpm -Uvh https://download.ceph.com/rpm-18.1.3/el9/noarch/ceph-release-1-1.el9.noarch.rpm
dnf install -y cephadm podman
dnf install -y centos-release-ceph-pacific
Run the following on ceph1:
cephadm bootstrap --mon-ip 192.168.122.60
The terminal prints output like the following; be sure to save the password.
https://192.168.122.60:8443/
User: admin
Password: 65eoa3cblm
Then continue with the following commands:
cephadm shell
cephadm add-repo --release quincy
dnf install ceph-common
If you have already switched the cephadm download source to a mirror, you can install ceph-common with
cephadm install ceph-common
To change the cephadm download source:
vim /etc/yum.repos.d/ceph.repo
Mirror for users in China: http://mirrors.ustc.edu.cn/ceph/
Check the Ceph version and status:
ceph -v
ceph status
Enable passwordless SSH from ceph1 to ceph1, ceph2, and ceph3:
ceph cephadm get-pub-key > ~/ceph.pub
ssh-copy-id -f -i ~/ceph.pub root@ceph1
ssh-copy-id -f -i ~/ceph.pub root@ceph2
ssh-copy-id -f -i ~/ceph.pub root@ceph3
Add ceph1, ceph2, and ceph3 to the cluster:
cephadm shell -- ceph orch host add ceph1 192.168.122.60
cephadm shell -- ceph orch host add ceph2 192.168.122.61
cephadm shell -- ceph orch host add ceph3 192.168.122.62
Label ceph1, ceph2, and ceph3:
cephadm shell -- ceph orch host label add ceph1 mon
cephadm shell -- ceph orch host label add ceph2 mon
cephadm shell -- ceph orch host label add ceph3 mon
Add OSDs:
cephadm shell -- ceph orch daemon add osd ceph1:/dev/vdb
cephadm shell -- ceph orch daemon add osd ceph2:/dev/vdb
cephadm shell -- ceph orch daemon add osd ceph3:/dev/vdb
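The three per-host commands above follow one pattern, so they can be generated with a small loop (a sketch; it assumes every host exposes the same spare device, /dev/vdb):

```shell
# Print the per-host OSD add commands for review; remove "echo" to run them.
for host in ceph1 ceph2 ceph3; do
  echo cephadm shell -- ceph orch daemon add osd "${host}:/dev/vdb"
done
```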
1.2. Deploying CephFS
Following the official documentation at https://docs.ceph.com/en/latest/cephadm/services/mds/#orchestrator-cli-cephfs , run the following commands on ceph1:
ceph fs volume create geek_cephfs --placement=3
ceph fs ls
ceph fs volume info geek_cephfs
ceph mds stat
ceph orch apply mds geek_cephfs --placement=3
[root@ceph1 ~]#
  cluster:
    id:     de382350-3221-11ee-bc03-525400b83c9a
    health: HEALTH_OK
  services:
    mon: 3 daemons, quorum stream960,stream961,stream962 (age 9m)
    mgr: stream960.vperxy(active, since 52m), standbys: stream961.ldsnmd
    mds: 1/1 daemons up, 2 standby
    osd: 3 osds: 3 up (since 8m), 3 in (since 9m)
  data:
    volumes: 1/1 healthy
    pools:   3 pools, 49 pgs
    objects: 24 objects, 451 KiB
    usage:   81 MiB used, 90 GiB / 90 GiB avail
    pgs:     49 active+clean
Use cat /etc/ceph/ceph.client.admin.keyring to view the key, which is needed to mount CephFS.
[root@stream960 ~]# cat /etc/ceph/ceph.client.admin.keyring
[client.admin]
    key = AQCZ4ctks3nnHxAAgGpIFheYkpdCoBERg98x3g==
    caps mds = "allow *"
    caps mgr = "allow *"
    caps mon = "allow *"
    caps osd = "allow *"
Run the following on any host:
mkdir -p /mnt/ceph
mount -t ceph 192.168.122.60:6789,192.168.122.61:6789,192.168.122.62:6789:/ /mnt/ceph -o name=admin,secret=AQCZ4ctks3nnHxAAgGpIFheYkpdCoBERg98x3g==
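Instead of pasting the secret by hand, it can be pulled out of the keyring with awk. A sketch using an inline copy of the keyring shown above; on a real node, replace the printf with `cat /etc/ceph/ceph.client.admin.keyring` and pass the result to mount's `secret=` option:

```shell
# Extract field 3 of the "key = <value>" line from a Ceph keyring.
# Inline sample data stands in for the real keyring file here.
secret=$(printf '[client.admin]\n\tkey = AQCZ4ctks3nnHxAAgGpIFheYkpdCoBERg98x3g==\n' \
  | awk '$1 == "key" { print $3 }')
echo "$secret"
```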
1.3. Deploying Ceph RGW
Install ceph-radosgw:
sudo yum install -y ceph-radosgw
1.3.1. Labeling Hosts
Labels determine which hosts the RGW daemons run on.
ceph orch host label add ceph1 rgw
ceph orch host label add ceph2 rgw
ceph orch host label add ceph3 rgw
Start RGW:
ceph orch apply rgw test_rgw default --placement=label:rgw --port=8000
Test the RGW service with curl. View the zone information:
radosgw-admin zone get --rgw-zone=default
Connect:
[source, bash]
curl http://ceph1:8000
Install s3cmd:
apt install -y ceph-common s3cmd
Create a user. [Note] Be sure to save the access_key and secret_key values.
radosgw-admin user create --uid=xy --display-name=administrator --email=xy@xy.com
1.3.2. Using S3
s3cmd --configure
# Enter the access_key
KUJSQNRB5QCGGOHMB6I0
# Enter the secret_key
ai4zduUFCjC83JJ4E6GqaWHFHCmsOp14qptCfZaE
# Enter the region
default
# Enter the RGW endpoint
storage02:8000
# Bucket hostname template
storage02
# Encryption password; none is used here, press Enter
Press Enter
# Path to the GPG program (present once installed), press Enter
Press Enter
# Use HTTPS?
No
# HTTP proxy; press Enter if none
Press Enter
# Test connectivity; 'success' means it worked
y
# Save the settings?
y
Create a bucket to test:
s3cmd mb s3://bucket1
List buckets:
[source, bash]
s3cmd ls
1.3.3. Using Swift
yum install python-swiftclient
Create a user:
[source, bash]
radosgw-admin user create --uid="testuser" --display-name="First User"
Create a subuser:
radosgw-admin subuser create --uid=testuser --subuser=testuser:swift --access=full
Common swift commands:
swift -A http://192.168.122.106:8000/auth/1.0 -U czhswift:swift -K AquvhRPHOom3S5dWLjyJYZwOwYKUz11EiSikZqgG post swiftbucket
swift -A http://192.168.122.106:8000/auth/1.0 -U czhswift:swift -K AquvhRPHOom3S5dWLjyJYZwOwYKUz11EiSikZqgG delete swiftbucket
swift -A http://192.168.122.106:8000/auth/1.0 -U czhswift:swift -K AquvhRPHOom3S5dWLjyJYZwOwYKUz11EiSikZqgG list
swift -A http://192.168.122.106:8000/auth/1.0 -U czhswift:swift -K AquvhRPHOom3S5dWLjyJYZwOwYKUz11EiSikZqgG list swiftbucket
swift -A http://192.168.122.106:8000/auth/1.0 -U czhswift:swift -K AquvhRPHOom3S5dWLjyJYZwOwYKUz11EiSikZqgG stat
swift -A http://192.168.122.106:8000/auth/1.0 -U czhswift:swift -K AquvhRPHOom3S5dWLjyJYZwOwYKUz11EiSikZqgG upload swiftbucket 1.txt
swift -A http://192.168.122.106:8000/auth/1.0 -U czhswift:swift -K AquvhRPHOom3S5dWLjyJYZwOwYKUz11EiSikZqgG delete swiftbucket 1.txt
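Every swift invocation above repeats the same three auth flags, so a small wrapper function keeps them in one place (a sketch; `swift_cli` is a hypothetical helper name, and the URL, user, and key are the example values above):

```shell
# Hypothetical wrapper: state the auth flags once, pass the subcommand through.
swift_cli() {
  swift -A http://192.168.122.106:8000/auth/1.0 \
        -U czhswift:swift \
        -K AquvhRPHOom3S5dWLjyJYZwOwYKUz11EiSikZqgG \
        "$@"
}
# usage: swift_cli list        swift_cli upload swiftbucket 1.txt
```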
2. Git
2.1. Setting Up Git on a Server
Before setting up a Git server, you need to export an existing repository as a bare repository, that is, a repository without a working directory. This is usually straightforward. To create a new bare repository by cloning, add the --bare option to the clone command. By convention, bare repository directory names end in .git. There are generally two approaches:
clone from an existing repository, or create a directory and initialize it as an empty repository.
2.1.1. Cloning from an Existing Repository
$ git clone --bare my_project my_project.git
Cloning into bare repository 'my_project.git'...
done.
2.1.2. Creating a Directory and Initializing It as an Empty Repository
Create a directory somewhere:
mkdir my_project.git
Enter the directory:
[source, bash]
cd my_project.git
Initialize the directory as a bare, shared repository:
[source, bash]
git init --bare --shared
2.1.3. Users Clone and Push
On John's computer:
$ cd myproject
$ git init
$ git add .
$ git commit -m 'initial commit'
$ git remote add origin git@gitserver:/srv/git/project.git
$ git push origin master
At this point, other developers can clone the repository and push their changes back; the steps are simple:
$ git clone git@gitserver:/srv/git/project.git
$ cd project
$ vim README
$ git commit -am 'fix for the README file'
$ git push origin master
2.2. Common Git Commands
2.2.1. Git Configuration
Git configuration exists at several levels: command line, worktree, repository, user (also called global), and system. This section covers the commands for the repository, user, and system levels. Precedence: repository > user > system.
Reference: https://git-scm.com/docs/git-config#FILES
.Repository-level configuration
#Run these inside the repository you want to configure
#Set the user name
git config user.name "username"
#Set the user email
git config user.email "user email"
#Run anywhere git is available
#Set the user name
git config --global user.name "username"
#Set the user email
git config --global user.email "user email"
#Run anywhere git is available
#Set the user name
git config --system user.name "username"
#Set the user email
git config --system user.email "user email"
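To see which file supplies a value, `git config --show-origin` prints the source file next to each setting. A quick sketch in a throwaway repository (the repository-level file wins under the precedence above):

```shell
# Create a scratch repo, set a repo-level value, and ask git where it came from.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.name "repo-level-name"
git config --show-origin --get user.name   # origin is .git/config
```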
You can also edit the configuration files directly with an editor such as VS Code.
Repository-level configuration file:
- linux: your_repository/.git/config
- windows: your_repository/.git/config
User-level configuration file:
- linux: ~/.gitconfig or ~/.config/git/config
- windows: C:\Users\<your username>\.gitconfig
System-level configuration file:
- linux: /etc/gitconfig
- windows: <install path>\Git\etc\gitconfig
3. GitLab
3.1. CentOS8
3.1.1. Adding the GitLab Yum Repository
Add the Yum repository:
curl https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.rpm.sh | bash
List the GitLab repositories by running the following command:
yum repolist all | grep gitlab
Terminal output:
gitlab_gitlab-ce/x86_64           gitlab_gitlab-ce           enabled: 853
!gitlab_gitlab-ce-source          gitlab_gitlab-ce-source    disabled
Check the available GitLab packages
List the packages:
yum repo-pkgs gitlab_gitlab-ce list
Terminal output:
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.bfsu.edu.cn
 * extras: mirrors.bfsu.edu.cn
 * updates: mirrors.cqu.edu.cn
Available Packages
gitlab-ce.x86_64    15.7.3-ce.0.el7    gitlab_gitlab-ce
View the package description:
yum --disablerepo=\* --enablerepo=gitlab_gitlab-ce info gitlab-ce
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.bfsu.edu.cn
 * extras: mirrors.bfsu.edu.cn
 * updates: mirrors.cqu.edu.cn
Available Packages
Name        : gitlab-ce
Arch        : x86_64
Version     : 15.7.3
Release     : ce.0.el7
Size        : 1.1 G
Repo        : gitlab_gitlab-ce/x86_64
Summary     : GitLab Community Edition (including NGINX, Postgres, Redis)
URL         : https://about.gitlab.com/
License     : MIT
Description : GitLab Community Edition (including NGINX, Postgres, Redis)
The GitLab package is large, so installation takes a while.
Disable the GitLab repository by default
Enable it individually when needed. The GitLab repository (hosted abroad) is often unreachable or slow, which interferes with Yum, so disable it by default:
yum-config-manager --disable gitlab_gitlab-ce | egrep '(\[gitlab_gitlab-ce\])|enabled'
Terminal output:
#[gitlab_gitlab-ce]
#enabled = 0 (or False)
3.1.2. Installing the GitLab Package
Install GitLab dependencies
When running the gitlab-ctl command, a warning appears:
Terminal output:
ffi-libarchive could not be loaded, libarchive is probably not installed on system, archive_file will not be available
Install the libarchive package to eliminate the warning:
yum install -y libarchive
Preset GitLab runtime parameters
Preset the GitLab access URL:
EXTERNAL_URL="http://gitlab.sjx.com:8181"
Preset the GitLab default password:
GITLAB_ROOT_PASSWORD=$(pwgen -s 20|head -n 1)
echo -e "GitLab default user: root\nGitLab default password: ${GITLAB_ROOT_PASSWORD}"
Terminal output:
GitLab default user: root
GitLab default password: QJQCM3fAFXKYpYUeSL5e
Preset the GitLab access domain:
egrep '^127.0.0.1 gitlab.sjx.com$' /etc/hosts > /dev/null || echo '127.0.0.1 gitlab.sjx.com' >> /etc/hosts
The entry 127.0.0.1 gitlab.sjx.com exists only for the installation and has no other purpose. The domain gitlab.sjx.com must resolve to an IP address for the EXTERNAL_URL parameter to take effect. Installing this way, you no longer need to edit the external_url parameter in /etc/gitlab/gitlab.rb by hand.
Install GitLab manually
Print the GitLab package URL:
GITLAB_RPM_URL=$(yumdownloader --disablerepo=\* --enablerepo=gitlab_gitlab-ce --urls gitlab-ce | egrep '^https://.+\.rpm$')
GITLAB_RPM_FILE=/tmp/$(basename ${GITLAB_RPM_URL})
echo -e "GitLab package file:\n\t${GITLAB_RPM_FILE}\nGitLab package URL:\n\t${GITLAB_RPM_URL}"
Terminal output:
GitLab package file:
    /tmp/gitlab-ce-15.7.3-ce.0.el7.x86_64.rpm
GitLab package URL:
    https://packages.gitlab.com/gitlab/gitlab-ce/el/7/x86_64/gitlab-ce-15.7.3-ce.0.el7.x86_64.rpm
Download the package:
wget -c ${GITLAB_RPM_URL} -O ${GITLAB_RPM_FILE}
Because wget is given the -c option, you can interrupt a slow download and re-run the command above to resume it.
Install GitLab:
EXTERNAL_URL=${EXTERNAL_URL} GITLAB_ROOT_PASSWORD=${GITLAB_ROOT_PASSWORD} yum install -y ${GITLAB_RPM_FILE}
After GitLab installs successfully, delete the downloaded file by hand: rm -f /tmp/gitlab-ce*.rpm
(During installation, the terminal output resembles a network installation of GitLab.)
Terminal output:
Loaded plugins: fastestmirror
Examining /tmp/gitlab-ce-15.6.3-ce.0.el7.x86_64.rpm: gitlab-ce-15.6.3-ce.0.el7.x86_64
Marking /tmp/gitlab-ce-15.6.3-ce.0.el7.x86_64.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package gitlab-ce.x86_64 0:15.6.3-ce.0.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=============================================================================
 Package     Arch      Version            Repository                    Size
=============================================================================
Installing:
 gitlab-ce   x86_64    15.6.3-ce.0.el7    /gitlab-ce-15.6.3-ce.0.el7.x86_64   2.4 G

Transaction Summary
=============================================================================
Install  1 Package

Total size: 2.4 G
Installed size: 2.4 G
...(remaining output as above)...
3.1.3. Checking the GitLab Service Status
Check the GitLab system service status:
systemctl status gitlab-runsvdir
Terminal output:
● gitlab-runsvdir.service - GitLab Runit supervision process
   Loaded: loaded (/usr/lib/systemd/system/gitlab-runsvdir.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2023-01-14 00:30:42 CST; 11h ago
 Main PID: 10912 (runsvdir)
   CGroup: /system.slice/gitlab-runsvdir.service
           ├─10912 runsvdir -P /opt/gitlab/service log: .............................................
           ├─10914 runsv logrotate
           ├─10924 svlogd -tt /var/log/gitlab/logrotate
           ├─10930 runsv redis
           ├─10932 /opt/gitlab/embedded/bin/redis-server unixsocket:/var/opt/gitlab/redis/redis.socket
           ├─10940 svlogd -tt /var/log/gitlab/redis
           ├─10950 runsv gitaly
           ├─10975 svlogd /var/log/gitlab/gitaly
           ├─11079 runsv postgresql
           ...(lines omitted)...
Jan 14 00:30:42 lan_server systemd[1]: Started GitLab Runit supervision process.
Check the status of all GitLab components:
gitlab-ctl status
Terminal output:
run: alertmanager: (pid 11869) 39866s; run: log: (pid 11659) 39907s
run: gitaly: (pid 11728) 39876s; run: log: (pid 10975) 40076s
run: gitlab-exporter: (pid 11845) 39868s; run: log: (pid 11451) 39925s
run: gitlab-kas: (pid 11817) 39870s; run: log: (pid 11225) 40017s
run: gitlab-workhorse: (pid 11829) 39869s; run: log: (pid 11371) 39942s
run: logrotate: (pid 25010) 488s; run: log: (pid 10924) 40088s
run: nginx: (pid 11399) 39940s; run: log: (pid 11415) 39938s
run: node-exporter: (pid 11839) 39869s; run: log: (pid 11443) 39931s
run: postgres-exporter: (pid 11879) 39866s; run: log: (pid 11682) 39903s
run: postgresql: (pid 11081) 40037s; run: log: (pid 11125) 40034s
run: prometheus: (pid 11854) 39867s; run: log: (pid 11630) 39914s
run: puma: (pid 11287) 39958s; run: log: (pid 11294) 39957s
run: redis: (pid 10932) 40084s; run: log: (pid 10940) 40082s
run: redis-exporter: (pid 11847) 39868s; run: log: (pid 11469) 39919s
run: sidekiq: (pid 11303) 39952s; run: log: (pid 11320) 39950s
Check GitLab's default HTTP port:
gitlab-ctl show-config 2>/dev/null | grep '"external-url":'
Terminal output:
"external-url": "http://gitlab.sjx.com:8181",
GitLab access URL: http://gitlab.sjx.com:8181
3.1.4. Adding Firewall Rules for GitLab
1. Add firewall allow rules:
GITLAB_PORT=8181
PERM="--permanent"
SERV_NAME=GITLAB_${GITLAB_PORT}
SERV="${PERM} --service=${SERV_NAME}"
firewall-cmd ${PERM} --new-service=${SERV_NAME}
firewall-cmd ${SERV} --set-short="GitLab ports"
firewall-cmd ${SERV} --set-description="GitLab port exceptions"
firewall-cmd ${SERV} --add-port=${GITLAB_PORT}/tcp
firewall-cmd ${PERM} --add-service=${SERV_NAME}
GITLAB_PORT is the GitLab port; keep it consistent with the port in the preset GitLab access URL.
2. Reload the firewall configuration:
firewall-cmd --reload
3. Check the firewall rules:
firewall-cmd --list-all
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: enp0s3
  sources:
  services: GITLAB_8181 dhcpv6-client ssh
  ports:
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
The allowed services must include GITLAB_8181. Now try your first visit to the GitLab web UI: open http://gitlab.sjx.com:8181 and complete the first login.
3.2. CentOS-stream-9
- Version: 16.9.6
- OS: CentOS-stream-9
#Add the gitlab repository
curl -s https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.rpm.sh | sudo bash
#Install
EXTERNAL_URL="http://gitlab.czh.com"
sudo yum install gitlab-ce-16.9.6-ce.0.el9.x86_64
Output:
  Running scriptlet: gitlab-ce-16.9.6-ce.0.el9.x86_64          217/221
  Installing       : gitlab-ce-16.9.6-ce.0.el9.x86_64          217/221
  Running scriptlet: gitlab-ce-16.9.6-ce.0.el9.x86_64          217/221
  Cleanup          : glibc-2.34-88.el9.x86_64                  218/221
  Cleanup          : glibc-langpack-en-2.34-88.el9.x86_64      219/221
  Cleanup          : glibc-gconv-extra-2.34-88.el9.x86_64      220/221
  Running scriptlet: glibc-gconv-extra-2.34-88.el9.x86_64      220/221
  Cleanup          : glibc-common-2.34-88.el9.x86_64           221/221
  Running scriptlet: gitlab-ce-16.9.6-ce.0.el9.x86_64          221/221
It looks like GitLab has not been configured yet; skipping the upgrade script.

(GitLab ASCII-art logo)

Thank you for installing GitLab!
GitLab was unable to detect a valid hostname for your instance.
Please configure a URL for your GitLab instance by setting `external_url`
configuration in /etc/gitlab/gitlab.rb file.
Then, you can start your GitLab instance by running the following command:
  sudo gitlab-ctl reconfigure

For a comprehensive list of configuration options please see the Omnibus GitLab readme
https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/README.md

Help us improve the installation experience, let us know how we did with a 1 minute survey:
https://gitlab.fra1.qualtrics.com/jfe/form/SV_6kVqZANThUQ1bZb?installation=omnibus&release=16-9

...
...
python3-setuptools-53.0.0-12.el9.noarch
qt5-srpm-macros-5.15.9-1.el9.noarch
redhat-rpm-config-207-1.el9.noarch
rust-srpm-macros-17-4.el9.noarch
sombok-2.4.0-16.el9.x86_64
systemtap-sdt-devel-5.0-4.el9.x86_64
zip-3.0-35.el9.x86_64
Complete!
For a comprehensive list of configuration options, see the Omnibus GitLab readme: https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/README.md
vim /etc/hosts
<IP of the GitLab host> gitlab.czh.com
Open http://gitlab.czh.com in a browser.
Default user: root
cat /etc/gitlab/initial_root_password | egrep '^Password'
Password: OhnSC43bG/FivMIySa/rF1tweMHi8RxbgMdZd0QDn84=
If the password does not work, follow the instructions in /etc/gitlab/initial_root_password.
4. Jenkins
4.1. Adding the Jenkins Yum Repository
4.1.1. Download the Repository File
sudo wget --inet4-only \
-O /etc/yum.repos.d/jenkins.repo \
https://pkg.jenkins.io/redhat-stable/jenkins.repo
wget option notes: --inet4-only downloads over IPv4 only; pkg.jenkins.io also resolves to IPv6 addresses, which can make the download fail. -O saves the downloaded file to the given path.
4.1.2. Importing the Repository Key
Import the key:
sudo rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io-2023.key
Update the cache:
sudo yum upgrade
4.1.3. Disabling the Jenkins Repository by Default
Enable the Jenkins repository individually when needed
The Jenkins repository (hosted abroad) is often unreachable or slow, which interferes with Yum, so disable it by default:
yum-config-manager --disable jenkins | egrep '(\[jenkins\])|enabled'
Terminal output:
[jenkins]
enabled = 0 (or False)
4.2. Installing the Jenkins Package
4.2.1. Install Dependencies
1. Install Java 17:
sudo yum install fontconfig java-17-openjdk
2. Check the default Java version:
java -version
openjdk version "17.0.6" 2023-01-17 LTS
OpenJDK Runtime Environment (Red_Hat-17.0.6.0.10-3.el9) (build 17.0.6+10-LTS)
OpenJDK 64-Bit Server VM (Red_Hat-17.0.6.0.10-3.el9) (build 17.0.6+10-LTS, mixed mode, sharing)
4.2.2. Install Jenkins
yum --disablerepo=\* --enablerepo=jenkins install -y jenkins
Reload the systemd manager configuration:
sudo systemctl daemon-reload
4.3. Configuring the Jenkins Service
4.3.1. Start Jenkins at Boot
sudo systemctl enable jenkins
sudo systemctl start jenkins
sudo systemctl status jenkins
2. Reload the systemd configuration:
systemctl daemon-reload
4.4. Adding Firewall Rules for Jenkins
1. Add firewall allow rules:
YOURPORT=8080
PERM="--permanent"
SERV="$PERM --service=jenkins"
firewall-cmd $PERM --new-service=jenkins
firewall-cmd $SERV --set-short="Jenkins ports"
firewall-cmd $SERV --set-description="Jenkins port exceptions"
firewall-cmd $SERV --add-port=$YOURPORT/tcp
firewall-cmd $PERM --add-service=jenkins
firewall-cmd --zone=public --add-service=http --permanent
YOURPORT is the Jenkins port; keep it consistent with the Jenkins startup parameters.
2. Reload the firewall configuration:
firewall-cmd --reload
5. ssh
5.1. Passwordless Login
5.1.1. linux
#Generate a key pair without a passphrase
ssh-keygen -N "" -f ~/.ssh/mk
#Upload the public key to the remote host
ssh-copy-id -i ~/.ssh/mk.pub root@192.168.2.236
#Configure the private key in the local Linux terminal
cat << EOF >> ~/.ssh/config
Host 192.168.2.236
IdentityFile ~/.ssh/mk
EOF
If the SSH public key is added in GitLab, you must add a local hosts entry for the domain before cloning.
5.1.2. windows
ssh-keygen -t rsa -f C:\Users\$env:USERNAME\.ssh\keyname
Press Enter through the prompts until you see:
SHA256:0TJsTFwaSSiX5q4oUtZhYyscvwQ4fA7Ou2h6rKiAzdo chengzenghuan@chengzenghuan-windows10
The key's randomart image is:
+---[RSA 3072]----+
| =+o.            |
| . =+o+          |
|.. = O .         |
|ooo.= .. +       |
|oo+B = S         |
|.==.= .          |
|o+++ o           |
|*=+ o            |
|%=E              |
+----[SHA256]-----+
In the commands below, change 192.168.2.236 to your remote host's IP.
Add-Content -Path "C:\Users\$env:USERNAME\.ssh\config" -Value "`n"
Add-Content -Path "C:\Users\$env:USERNAME\.ssh\config" -Value "Host 192.168.2.236"
Add-Content -Path "C:\Users\$env:USERNAME\.ssh\config" -Value " IdentityFile C:\Users\$env:USERNAME\.ssh\keyname"
In the commands below, change user@192.168.2.236 to the user and IP you log in with on the remote host.
cat C:\Users\$env:USERNAME\.ssh\keyname.pub | ssh user@192.168.2.236 "cat >> ~/.ssh/authorized_keys"
ssh user@192.168.2.236
5.2. Cloning a Git Repository with an SSH Key Pair
mkdir -p ~/.ssh
#Press Enter through the prompts until the key is generated
ssh-keygen -t rsa -b 4096 -C "chengzenghuan2018@gmail.com" -f ~/.ssh/keyname
Option notes:
- -t rsa: key type RSA
- -b 4096: key length 4096 bits
- -C "chengzenghuan@github.com": a comment, the email address associated with the key pair (it appears at the end of the public key file)
- -f ~/.ssh/keyname: name and path of the key file
- -N "": empty passphrase
Add the public key keyname.pub to GitHub before performing the following steps.
cat <<EOF >> ~/.ssh/config
Host github.com
HostName github.com
User git
IdentityFile ~/.ssh/keyname
IdentitiesOnly yes
EOF
#Test that the key was added successfully
ssh -T git@github.com
Hi xxxxx! You've successfully authenticated, but GitHub does not provide shell access.
If you get the error: ssh: connect to host github.com port 22: Connection refused, use GitHub's port 443 instead by editing the ~/.ssh/config file.
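GitHub documents an SSH-over-HTTPS fallback at ssh.github.com on port 443. A sketch of the corresponding ~/.ssh/config entry, reusing the keyname file from the earlier steps:

```shell
# Route SSH traffic for github.com through ssh.github.com on port 443.
mkdir -p ~/.ssh
cat << EOF >> ~/.ssh/config
Host github.com
    HostName ssh.github.com
    Port 443
    User git
    IdentityFile ~/.ssh/keyname
EOF
```

Then run `ssh -T git@github.com` again to verify.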
Test again whether the key works.
On success, the terminal prints: Hi xxxxx! You've successfully authenticated, but GitHub does not provide shell access.
5.3. cmd Cannot Find ssh
- OS: Windows 10
While configuring environment variables, a mistake made the ssh command unavailable in cmd.
Adding the directory back to the environment variables fixes it.
This PC → right-click → Properties → About → Advanced system settings → Advanced → Environment Variables
Add C:\Windows\System32\OpenSSH; to the system Path variable.
Before: %JAVA_HOME%\bin;%JAVA_HOME%\jre\bin;
After: %JAVA_HOME%\bin;%JAVA_HOME%\jre\bin;C:\Windows\System32\OpenSSH;
5.4. SSH Connection Tuning
- OS: CentOS 7
1. Edit the SSH server configuration file (sshd_config) to speed up logins, keep connections alive, and so on.
# During SSH login the server reverse-resolves the client IP to a domain name, slowing down login
sed -i "s/#UseDNS yes/UseDNS no/" /etc/ssh/sshd_config
sed -i "s/GSSAPIAuthentication yes/GSSAPIAuthentication no/" /etc/ssh/sshd_config
sed -i "s/GSSAPICleanupCredentials yes/GSSAPICleanupCredentials no/" /etc/ssh/sshd_config
# Maximum failed login attempts
sed -i "s/#MaxAuthTries 6/MaxAuthTries 10/" /etc/ssh/sshd_config
# Keepalive: the server sends a request every N seconds; a client response completes one keepalive check
sed -i "s/#ClientAliveInterval 0/ClientAliveInterval 30/" /etc/ssh/sshd_config
# Maximum keepalive retries: after N requests with no client response, the server drops the connection
sed -i "s/#ClientAliveCountMax 3/ClientAliveCountMax 10/" /etc/ssh/sshd_config
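The substitutions can be previewed on a scratch copy before touching the real file; a sketch with two of the patterns above applied to sample input:

```shell
# Apply two of the sed substitutions to a temp file and show the result;
# /etc/ssh/sshd_config itself is left untouched.
tmp=$(mktemp)
printf '#UseDNS yes\n#MaxAuthTries 6\n' > "$tmp"
sed -i "s/#UseDNS yes/UseDNS no/" "$tmp"
sed -i "s/#MaxAuthTries 6/MaxAuthTries 10/" "$tmp"
cat "$tmp"   # shows: UseDNS no / MaxAuthTries 10
```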
2. Reload the SSH server configuration:
systemctl reload sshd
3. Exit the current shell and log back in over SSH; the new configuration takes effect.
6. prometheus
This example uses two virtual machines, one running prometheus and one running node_exporter.
- Run the following commands on both virtual machines:
#Update the system
yum update -y
#Set up time synchronization
yum install -y chrony
systemctl enable chronyd
systemctl start chronyd
systemctl status chronyd
#Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
6.1. Prometheus
- Run the following on the host where Prometheus will be installed.
Reference: https://prometheus.io/docs/prometheus/latest/getting_started/#starting-up-some-sample-targets
cd ~
wget https://github.com/prometheus/prometheus/releases/download/v2.49.1/prometheus-2.49.1.linux-amd64.tar.gz
tar -zxvf prometheus-2.49.1.linux-amd64.tar.gz -C /opt/
cd /opt/prometheus-*
Run the following to check that it works:
./prometheus --help
Scrape and monitor Prometheus's own health.
cat << EOF > prometheus.yml
global:
scrape_interval: 15s # By default, scrape targets every 15 seconds.
# Attach these labels to any time series or alerts when communicating with
# external systems (federation, remote storage, Alertmanager).
external_labels:
monitor: 'codelab-monitor'
# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
# The job name is added as a label 'job=<job_name>' to any timeseries scraped from this config.
- job_name: 'prometheus'
# Override the global default and scrape targets from this job every 5 seconds.
scrape_interval: 5s
static_configs:
- targets: ['localhost:9090']
EOF
./prometheus --config.file=prometheus.yml
Enter the following in the search box:
prometheus_target_interval_length_seconds
Click 'Execute'; normally this returns many different time series.

#Reload the prometheus configuration file
kill -s SIGHUP `pgrep -f prometheus`
#Stop prometheus
kill -s SIGTERM `pgrep -f prometheus`
6.2. prometheus-systemctl
Add prometheus as a system service so it can be managed with systemctl.
cat <<EOF >/usr/lib/systemd/system/prometheus.service
[Unit]
Description = Prometheus server daemon
[Service]
Type = simple
WorkingDirectory = /opt/prometheus-2.49.1.linux-amd64
ExecStart = /opt/prometheus-2.49.1.linux-amd64/prometheus --config.file=/opt/prometheus-2.49.1.linux-amd64/prometheus.yml
ExecStop = /bin/kill -s SIGTERM \$MAINPID
ExecReload = /bin/kill -s SIGHUP \$MAINPID
[Install]
WantedBy=multi-user.target
EOF
# Reload the systemd manager configuration
sudo systemctl daemon-reload
sudo systemctl start prometheus.service
sudo systemctl enable prometheus.service
sudo systemctl reload prometheus.service
6.3. node_exporter
- Run the following on the host where node_exporter will be installed.
node_exporter download page: https://prometheus.io/download/#node_exporter
cd ~
wget https://github.com/prometheus/node_exporter/releases/download/v1.7.0/node_exporter-1.7.0.linux-amd64.tar.gz
tar -xzvf node_exporter-*.*.tar.gz -C /opt/
cd /opt/node_exporter-*.*
./node_exporter
......
......
ts=2024-01-31T09:01:28.677Z caller=node_exporter.go:117 level=info collector=xfs
ts=2024-01-31T09:01:28.677Z caller=node_exporter.go:117 level=info collector=zfs
ts=2024-01-31T09:01:28.677Z caller=tls_config.go:274 level=info msg="Listening on" address=127.0.0.1:9101
ts=2024-01-31T09:01:28.677Z caller=tls_config.go:310 level=info msg="TLS is enabled." http2=true address=127.0.0.1:9100
6.4. node_exporter-systemctl
Press Ctrl-C to stop the program, then add node_exporter as a system service so it can be managed with systemctl.
cat << EOF > /usr/lib/systemd/system/node_exporter.service
[Unit]
Description = node_exporter server daemon
[Service]
Type = simple
ExecStart = /opt/node_exporter-1.7.0.linux-amd64/node_exporter
[Install]
WantedBy=multi-user.target
EOF
# Reload the systemd manager configuration
sudo systemctl daemon-reload
sudo systemctl start node_exporter
sudo systemctl enable node_exporter
sudo systemctl status node_exporter
6.5. Scraping node_exporter Data with Prometheus
- Run the following on the prometheus host.
cat <<EOF >>/etc/hosts
xxx.xxx.xxx.xxx node_exporter1
EOF
The domain name must resolve on the prometheus host, or the node_exporter data cannot be scraped.
Write the following configuration to /opt/prometheus-2.49.1.linux-amd64/prometheus.yml:
cat <<EOF> /opt/prometheus-2.49.1.linux-amd64/prometheus.yml
global:
scrape_interval: 15s # By default, scrape targets every 15 seconds.
# Attach these labels to any time series or alerts when communicating with
# external systems (federation, remote storage, Alertmanager).
external_labels:
monitor: 'codelab-monitor'
# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
# The job name is added as a label 'job=<job_name>' to any timeseries scraped from this config.
- job_name: 'prometheus'
# Override the global default and scrape targets from this job every 5 seconds.
scrape_interval: 5s
static_configs:
- targets: ['localhost:9090']
- job_name: 'node_exporter'
static_configs:
- targets: ['node_exporter1:9100']
EOF
sudo systemctl reload prometheus.service
Visit http://<prometheus host IP>:9090/ and go to Status → Targets.

6.6. PromQL Examples
#CPU usage (%)
(1 - sum(rate(node_cpu_seconds_total{mode="idle"}[1m])) by (instance) / sum(rate(node_cpu_seconds_total[1m])) by (instance) ) * 100
#Memory in use (bytes)
node_memory_MemTotal_bytes{instance="node_exporter1:9100", job="node_exporter"} - node_memory_MemFree_bytes{instance="node_exporter1:9100", job="node_exporter"} - node_memory_Cached_bytes{instance="node_exporter1:9100", job="node_exporter"} - node_memory_Buffers_bytes{instance="node_exporter1:9100", job="node_exporter"}
#Disk usage (bytes)
node_filesystem_size_bytes{instance="node_exporter1:9100", job="node_exporter", device!~"vmhgfs-fuse"} - node_filesystem_avail_bytes{instance="node_exporter1:9100", job="node_exporter"}
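Following the same pattern, the disk figures can also be expressed as a usage percentage (a sketch derived from the two disk metrics above):

```promql
(1 - node_filesystem_avail_bytes{instance="node_exporter1:9100", job="node_exporter"}
   / node_filesystem_size_bytes{instance="node_exporter1:9100", job="node_exporter"}) * 100
```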
6.7. Installing the MySQL Exporter
wget https://github.com/prometheus/mysqld_exporter/releases/download/v0.15.0/mysqld_exporter-0.15.0.linux-amd64.tar.gz
tar xvfz mysqld_exporter-0.15.0.linux-amd64.tar.gz
[pr@zabbix Download]$ ls
mysqld_exporter-0.15.0.linux-amd64         prometheus-2.46.0.linux-amd64
mysqld_exporter-0.15.0.linux-amd64.tar.gz  prometheus-2.46.0.linux-amd64.tar.gz
cd mysqld_exporter-0.15.0.linux-amd64
Log in to the database as root:
mysql -uroot -ppassword
Create a user:
#Relax the password policy, otherwise the simple password 'password' is rejected
mysql>SET GLOBAL validate_password.policy = 0;
mysql>CREATE USER 'exporter'@'localhost' IDENTIFIED BY 'password' WITH MAX_USER_CONNECTIONS 3;
mysql>GRANT PROCESS, REPLICATION CLIENT, SELECT ON *.* TO 'exporter'@'localhost';
mysql>quit;
Create and edit the file .my.cnf:
cat << EOF > .my.cnf
[client]
user=exporter
password=password
EOF
Enter the prometheus-2.46.0.linux-amd64 directory:
cd prometheus-2.46.0.linux-amd64/
cat << EOF >> prometheus.yml
- job_name: "mysqld"
static_configs:
- targets: ["localhost:9104"]
EOF
cd ../mysqld_exporter-0.15.0.linux-amd64/
./mysqld_exporter --config.my-cnf=.my.cnf
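Like prometheus and node_exporter above, mysqld_exporter can also be run as a systemd service. A sketch of the unit file; the /opt paths are assumptions, so adjust them to wherever the exporter and its .my.cnf actually live. The file is drafted locally first and then moved into place as root:

```shell
# Draft the unit file in the current directory, mirroring the units above.
# The /opt/... paths below are assumptions -- adjust to your install location.
cat << EOF > mysqld_exporter.service
[Unit]
Description = mysqld_exporter server daemon
[Service]
Type = simple
WorkingDirectory = /opt/mysqld_exporter-0.15.0.linux-amd64
ExecStart = /opt/mysqld_exporter-0.15.0.linux-amd64/mysqld_exporter --config.my-cnf=/opt/mysqld_exporter-0.15.0.linux-amd64/.my.cnf
[Install]
WantedBy=multi-user.target
EOF
# then: sudo mv mysqld_exporter.service /usr/lib/systemd/system/ && sudo systemctl daemon-reload
```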
7. Grafana
7.1. Grafana
Download: https://grafana.com/grafana/download
Reference: https://grafana.com/docs/grafana/latest/setup-grafana/start-restart-grafana/
- Version: 10.3.1
- Edition: Enterprise
- Release Date: 2024-01-24
cd ~
#Install Grafana from the URL
sudo yum install -y https://dl.grafana.com/enterprise/release/grafana-enterprise-10.3.1-1.x86_64.rpm
#Or install Grafana from a downloaded package
sudo yum install -y grafana-enterprise-10.3.1-1.x86_64.rpm
sudo systemctl daemon-reload
sudo systemctl start grafana-server
sudo systemctl status grafana-server
sudo systemctl enable grafana-server.service
For further Grafana configuration, see the official documentation.
8. Zabbix
8.1. Choosing the Zabbix Server Platform
Choose the Zabbix server platform to get the matching rpm package.
The steps differ depending on the chosen platform; this tutorial uses the following.

8.2. Installing and Configuring Zabbix
If EPEL is installed, you need to exclude the zabbix packages it provides.
Edit the configuration file /etc/yum.repos.d/epel.repo:
vim /etc/yum.repos.d/epel.repo
Then add the following:
[epel]
...
excludepkgs=zabbix*
1. Install the zabbix repository:
rpm -Uvh https://repo.zabbix.com/zabbix/6.4/rhel/9/x86_64/zabbix-release-6.4-1.el9.noarch.rpm
dnf clean all
2. Install the Zabbix server, web frontend, and agent:
dnf install zabbix-server-mysql zabbix-web-mysql zabbix-apache-conf zabbix-sql-scripts zabbix-selinux-policy zabbix-agent
3. Create the initial database. Make sure a database server is up and running, then run the following on the database host.
mysql -uroot -p
password
mysql> SET GLOBAL validate_password.policy = 0;
mysql> create database zabbix character set utf8mb4 collate utf8mb4_bin;
mysql> create user zabbix@localhost identified by 'password';
mysql> grant all privileges on zabbix.* to zabbix@localhost;
mysql> set global log_bin_trust_function_creators = 1;
mysql> quit;
Import the initial schema and data; you will be prompted for the newly created user's password.
zcat /usr/share/zabbix-sql-scripts/mysql/server.sql.gz | mysql --default-character-set=utf8mb4 -uzabbix -p zabbix
After importing the database schema, disable the log_bin_trust_function_creators option.
mysql -uroot -p
password
mysql> set global log_bin_trust_function_creators = 0;
mysql> quit;
4. Configure the database for the Zabbix server. Edit the configuration file /etc/zabbix/zabbix_server.conf:
vim /etc/zabbix/zabbix_server.conf
DBPassword=password
5. Start the Zabbix server and agent processes and enable them at boot:
systemctl restart zabbix-server zabbix-agent httpd php-fpm
systemctl enable zabbix-server zabbix-agent httpd php-fpm
6. Open the Zabbix web UI. When using the Apache web server, the default URL of the Zabbix UI is http://host/zabbix
8.3. Quick Start
1. Log in and set up users
Overview:
In this section you will learn how to log in and set up system users in Zabbix.

Log in as the Zabbix superuser with the following credentials:
Username: Admin  Password: zabbix
You will land on the Zabbix home page.

8.4. Zabbix+Mysql
8.4.1. Overview
This template is designed to make monitoring MySQL with Zabbix via the Zabbix agent simple, with no additional plugins required.
8.4.2. Requirements
Zabbix version: 6.4 and higher.
8.4.3. Tested versions
This template was tested on:
- mysql Ver 8.0.34
- Zabbix version: 6.4
8.4.4. Setup
2. Copy userparameter_mysql.conf into the Zabbix agent configuration directory, renaming it template_db_mysql.conf:
cp /usr/share/doc/zabbix-agent/userparameter_mysql.conf /etc/zabbix/zabbix_agentd.d/template_db_mysql.conf
3. Restart the Zabbix agent:
systemctl restart zabbix-agent
4. Create a MySQL user for monitoring:
mysql>CREATE USER 'zbx_monitor'@'%' IDENTIFIED BY 'password';
mysql>GRANT REPLICATION CLIENT,PROCESS,SHOW DATABASES,SHOW VIEW ON *.* TO 'zbx_monitor'@'%';
5. Create /var/lib/zabbix/.my.cnf:
mkdir /var/lib/zabbix
cat << EOF >/var/lib/zabbix/.my.cnf
[client]
user='zbx_monitor'
password='password'
EOF
8.4.5. Start
Open the Zabbix home page at http://host/zabbix

In the left menu: Monitoring → Hosts → Create host

→ Update
Once created, a host named mysql appears, as shown.

Click the host name to view detailed data, then click an item.


9. zhiyan-mod-letsencrypt
9.1. Configuring zhiyan-mod-letsencrypt
cd /home/czh/workspace/github
git clone ssh://git@git.cdgeekcamp.com:4295/zhiyanmodule/zhiyan-mod-letsencrypt.git
cd zhiyan-mod-letsencrypt
pycharm .
Copy letsencrypt.conf.sample in the conf directory to letsencrypt.conf and edit it as follows. Find the following parameters:
1. Change
dry_run = no
to
dry_run = yes
2. Change
language_file=/opt/gc/zy/etc/language/zh_CN/letsencrypt.json
to
language_file=/home/czh/workspace/github/zhiyan-mod-letsencrypt/language/zh_CN/letsencrypt.json
3. Change every occurrence in the file of
level = INFO
to
level = TRACE
Save the log files to analyze under the local /var/log/letsencrypt/ directory (the files to analyze are usually in /var/log/letsencrypt on the server):
mkdir -p /var/log/letsencrypt/
scp root@8.210.45.121:/var/log/letsencrypt/letsencrypt.log /var/log/letsencrypt/
Copy the /etc/letsencrypt directory from a server that has completed certificate issuance to the local /etc/letsencrypt:
scp -r root@8.210.45.121:/etc/letsencrypt /etc/letsencrypt
9.2. Installing the zySDK
1. Clone the required projects:
cd /home/czh/workspace/github
git clone ssh://git@git.cdgeekcamp.com:4295/zhiyan/libzygrpc.git
git clone ssh://git@git.cdgeekcamp.com:4295/zhiyan/libzymod-python.git
2. In the libzymod-python directory, do the following:
cd /home/czh/workspace/github/libzymod-python/
git checkout dev
git pull origin dev
sh build.sh
cd dist/
pip install zymod-0.0.2.3-py3-none-any.whl
3. Install the following dependency versions:
pip install grpcio==1.57.0
pip install grpcio-tools==1.57.0
pip install protobuf==4.21.12
pip install setuptools==68.0.0
An older setuptools version can also be used, such as
4. In the libzygrpc directory, do the following:
cd /home/czh/workspace/github/libzygrpc/
git checkout dev
git pull origin dev
cd python/
sh build.sh
cd dist/
pip install zygrpc-0.0.1.15-py3-none-any.whl
Everything is now in place. Return to PyCharm and check for missing-dependency errors; once there are none, go back to the zhiyan-mod-letsencrypt directory and continue:
cd /home/czh/workspace/github/zhiyan-mod-letsencrypt
python zymod_letsencrypt.py -c /home/czh/workspace/github/zhiyan-mod-letsencrypt/conf/letsencrypt.conf
[czh@archlinux zhiyan-mod-letsencrypt]$ python zymod_letsencrypt.py -c /home/czh/workspace/github/zhiyan-mod-letsencrypt/conf/letsencrypt.conf
2023-08-28 19:55:45 26872 [INFO] 未启用日志配置文件,加载默认设置
2023-08-28 19:55:45 26872 [INFO] 日志配置文件 '/home/czh/workspace/github/zhiyan-mod-letsencrypt/conf/letsencrypt.conf' 加载成功
2023-08-28 19:55:45 26872 [INFO] 查找自动续签定时任务设置:当前模式->systemd
2023-08-28 19:55:45 26872 [TRACE] Enter function: get_timer_prop
2023-08-28 19:55:45 26872 [TRACE] Enter function: __calc_next_elapse
2023-08-28 19:55:45 26872 [TRACE] input->now_ts=1693223745.7171829
2023-08-28 19:55:45 26872 [TRACE] input->now_monotonic_ts=39455.986456076
2023-08-28 19:55:45 26872 [TRACE] input->next_usec=1693238400000000
2023-08-28 19:55:45 26872 [TRACE] input->next_monotonic_usec=0
2023-08-28 19:55:45 26872 [TRACE] var->next_ts=1693238400.0
2023-08-28 19:55:45 26872 [TRACE] var->next_monotonic_ts=0.0
2023-08-28 19:55:45 26872 [TRACE] output->result=1693238400.0
2023-08-28 19:55:45 26872 [TRACE] Exit function: __calc_next_elapse
2023-08-28 19:55:45 26872 [TRACE] output->result=ZySystemdTimerProp(timer_name='letsencrypt.timer', unit_name='letsencrypt.service', timers_calendar=[('OnCalendar', '*-*-* 00:00:00')], next_elapse=datetime.datetime(2023, 8, 29, 0, 0), last_trigger=datetime.datetime(1970, 1, 1, 8, 0), result='success', persistent=True, wake_system=False)
2023-08-28 19:55:45 26872 [TRACE] Exit function: get_timer_prop
2023-08-28 19:55:45 26872 [TRACE] Enter function: get_last_result_from_log
2023-08-28 19:55:45 26872 [TRACE] input->log_file=/var/log/letsencrypt/letsencrypt.log
2023-08-28 19:55:45 26872 [TRACE] var->_log_file=/var/log/letsencrypt/letsencrypt.log
2023-08-28 19:55:45 26872 [TRACE] var->result=(True, datetime.datetime(2023, 8, 25, 9, 33, 19))
2023-08-28 19:55:45 26872 [TRACE] output->result=(True, datetime.datetime(2023, 8, 25, 9, 33, 19), '/var/log/letsencrypt/letsencrypt.log')
2023-08-28 19:55:45 26872 [TRACE] Exit function: get_last_result_from_log
2023-08-28 19:55:45 26872 [TRACE] var->last_run=1970-01-01 08:00:00
2023-08-28 19:55:45 26872 [TRACE] var->next_running=2023-08-29 00:00:00
2023-08-28 19:55:45 26872 [TRACE] Enter function: mod_send_request_grpc
2023-08-28 19:55:45 26872 [TRACE] var->name=letsencrypt
2023-08-28 19:55:45 26872 [TRACE] var->datetime=2023-08-28 11:55:45.724902+00:00
2023-08-28 19:55:45 26872 [DEBUG] content= { "Certificates": [ { "Certificate": { "Issued By": { "Common Name": "R3", "Organization": "Let's Encrypt", "Organization Unit": "<未包含在证书中>" }, "Issued To": { "Common Name": "*.chengzenghuan.asia", "Organization": "<未包含在证书中>", "Organization Unit": "<未包含在证书中>" }, "Subject Alternative Name": { "DNS Names": [ "*.chengzenghuan.asia" ] }, "Validity Period": { "Expires On": "2023-11-23 05:26:20", "Issued On": "2023-08-25 05:26:21", "Time Left": "86天17时30分34秒" } }, "Certificate Path": "/etc/letsencrypt/live/chengzenghuan.asia/fullchain.pem", "Domain": "*.chengzenghuan.asia", "Private Key Path": "/etc/letsencrypt/live/chengzenghuan.asia/privkey.pem", "Private Key Type": "ECDSA", "Root Certificate Path": "/etc/letsencrypt/live/chengzenghuan.asia/chain.pem" } ], "CertificatesTitleColName": "Domain", "RenewalTimerState": { "Activate": "letsencrypt.service", "LastRan": "1970-01-01 08:00:00", "LastRanResult": true, "Left": "4 h, 4 min, 14 sec", "NextRunning": "2023-08-29 00:00:00", "Passed": "19597 days, 11 h, 55 min, 45 sec", "RenewalLogFile": "/var/log/letsencrypt/letsencrypt.log", "RenewalResult": true, "RenewalTime": "2023-08-25 09:33:19", "SystemTime": "2023-08-28 19:55:45", "TimerName": "letsencrypt.timer" } }
2023-08-28 19:55:45 26872 [INFO] zymod:试运行中,不进行注册.....
2023-08-28 19:55:45 26872 [TRACE] Exit function: mod_send_request_grpc
10. zhiyan-mod-php-fpm
10.1. Clone zhiyan-mod-php-fpm, libzygrpc, and libzymod-rust
cd /home/czh/workspace/github/ZhiYanModule
git clone ssh://git@git.cdgeekcamp.com:4295/zhiyanmodule/zhiyan-mod-php-fpm.git
cd zhiyan-mod-php-fpm
git checkout dev
git pull origin dev
cd /home/czh/workspace/github/ZhiYan
git clone ssh://git@git.cdgeekcamp.com:4295/zhiyan/libzymod-rust.git
cd libzymod-rust
git checkout dev
git pull origin dev
cd /home/czh/workspace/github/ZhiYan
git clone ssh://git@git.cdgeekcamp.com:4295/zhiyan/libzygrpc.git
cd libzygrpc
git checkout dev
git pull origin dev
Switch to the dev branch and pull the latest code in every project.
10.1.1. Configure zhiyan-mod-php-fpm
cd /home/czh/workspace/github/ZhiYan
cat << EOF > Cargo.toml
[workspace]
members = ["libzymod-rust"]
exclude = ["libzygrpc", "nginx-access-log-parser", "nginx-error-log-parser"]
[patch]
[patch.crates-io]
[patch.crates-io.libzymod-rust]
path = "/home/czh/workspace/github/ZhiYan/libzymod-rust"
[patch.crates-io.libzygrpc]
path = "/home/czh/workspace/github/ZhiYan/libzygrpc/rust"
EOF
cd /home/czh/workspace/github/ZhiYanModule
cat << EOF >Cargo.toml
[workspace]
members = [
"zhiyan-mod-php-fpm",
]
[patch.crates-io]
libzymod-rust = { path = '/home/czh/workspace/github/ZhiYan/libzymod-rust' }
libzygrpc = { path = '/home/czh/workspace/github/ZhiYan/libzygrpc/rust' }
EOF
10.2. Rust
For Rust installation and usage, refer to this document.
10.3. Build and run zhiyan-mod-php-fpm
cd /home/czh/workspace/github/ZhiYan/libzygrpc/rust
cargo build
cd /home/czh/workspace/github/ZhiYan/libzymod-rust
cargo build
cd /home/czh/workspace/github/ZhiYanModule/zhiyan-mod-php-fpm
cargo build
Switch to the /home/czh/workspace/github/ZhiYanModule/zhiyan-mod-php-fpm directory and run the following to get an executable:
cd /home/czh/workspace/github/ZhiYanModule/zhiyan-mod-php-fpm
cargo run
warning: `zhiyan-mod-php-fpm` (bin "zhiyan-mod-php-fpm") generated 1 warning Finished dev [unoptimized + debuginfo] target(s) in 0.08s Running `/home/czh/workspace/github/ZhiYanModule/target/debug/zhiyan-mod-php-fpm` 2023-08-29 19:29:08 30984 [WARN] 检测到日志配置文件'/opt/gc/zy/etc/php-fpm_log.yaml'不存在,将加载默认设置(Level:Debug) 2023-08-29 19:29:08 30984 [ERROR] 检测到智眼模块配置文件'/opt/gc/zy/etc/php-fpm.conf'不存在
The executable is:
/home/czh/workspace/github/ZhiYanModule/target/debug/zhiyan-mod-php-fpm
Run the following to view the help text:
/home/czh/workspace/github/ZhiYanModule/target/debug/zhiyan-mod-php-fpm --help
Update the config files under zhiyan-mod-php-fpm/conf:
cd /home/czh/workspace/github/ZhiYanModule/zhiyan-mod-php-fpm/conf
mv php-fpm.conf.sample php-fpm.conf
mv php-fpm.log.yaml.sample php-fpm.log.yaml
1. Edit php-fpm.conf
vim php-fpm.conf
Change:
language_file=/opt/gc/zy/etc/language/zh_CN/php-fpm.json
to:
language_file=/home/czh/workspace/github/ZhiYanModule/zhiyan-mod-php-fpm/language/zh_CN/php-fpm.json
2. Edit php-fpm.log.yaml
vim php-fpm.log.yaml
Change level: error to level: trace.
mkdir -p /var/log/php
touch /var/log/php/errors.log
chown -R czh:czh /var/log/php/errors.log
cd /home/czh/workspace/github/ZhiYanModule/zhiyan-mod-php-fpm
mkdir -p var/log
touch var/log/php-fpm.log
Run:
/home/czh/workspace/github/ZhiYanModule/target/debug/zhiyan-mod-php-fpm -c /home/czh/workspace/github/ZhiYanModule/zhiyan-mod-php-fpm/conf/php-fpm.conf -l /home/czh/workspace/github/ZhiYanModule/zhiyan-mod-php-fpm/conf/php-fpm.log.yaml
[czh@archlinux php-fpm.d]$ /home/czh/workspace/github/ZhiYanModule/target/debug/zhiyan-mod-php-fpm -c /home/czh/workspace/github/ZhiYanModule/zhiyan-mod-php-fpm/conf/php-fpm.conf -l /home/czh/workspace/github/ZhiYanModule/zhiyan-mod-php-fpm/conf/php-fpm.log.yaml log4rs: error deserializing appender file: Permission denied (os error 13) log4rs: Reference to nonexistent appender: `file` 2023-08-29 19:57:48 32333 [INFO] 日志配置文件'/home/czh/workspace/github/ZhiYanModule/zhiyan-mod-php-fpm/conf/php-fpm.log.yaml'加载成功。 2023-08-29 19:57:48 32333 [INFO] Code:"1",Messages:"phpfpm模块注册失败,Agent连接失败,十秒后进行下一次尝试,Error Message:transport error" 2023-08-29 19:57:58 32333 [INFO] Code:"1",Messages:"phpfpm模块注册失败,Agent连接失败,十秒后进行下一次尝试,Error Message:transport error"
Install and configure php-fpm
sudo pacman -Syy
sudo pacman -S extra/php-fpm
systemctl enable php-fpm.service
systemctl start php-fpm.service
systemctl status php-fpm.service
vim /etc/php/php-fpm.d/www.conf
Uncomment line 258 by removing the leading ';':
255 ; anything, but it may not be a good idea to use the .php extension or it
256 ; may conflict with a real PHP file.
257 ; Default Value: not set
258 ;pm.status_path = /status
259
260 ; The address on which to accept FastCGI status request. This creates a new
261 ; invisible pool that can handle requests independently. This is useful
Restart php-fpm.service:
systemctl restart php-fpm.service
Configure nginx
vim /etc/nginx/nginx.conf
Add the following just above the last '}' at the end of the file:
server {
    listen 8023;
    server_name _;
    root /usr/share/nginx/html;

    location ~ ^/status$ {
        fastcgi_pass unix:/run/php-fpm/php-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
        allow 127.0.0.1;
        deny all;
    }
}
11. grpc+python
11.1. Quick start
11.1.1. Environment setup
Run all of the following commands as root.
python -m ensurepip
This checks whether pip is installed in the current Python environment; if not, it downloads and installs the latest pip automatically.
Upgrade pip:
python -m pip install --upgrade pip
11.1.2. Install gRPC
python -m pip install grpcio
11.1.3. gRPC tools
python -m pip install grpcio-tools
11.1.4. Download the examples
git clone -b v1.57.0 --depth 1 --shallow-submodules https://github.com/grpc/grpc
cd grpc/examples/python/helloworld
11.1.5. Run the gRPC application
Change into the examples/python/helloworld directory.
1. Run the server
python greeter_server.py
[root@gitserver helloworld]# python greeter_server.py Server started, listening on 50051
2. In another terminal, run the client from the same directory
python greeter_client.py
[root@gitserver helloworld]# python greeter_client.py Will try to greet world ... Greeter client received: Hello, you!
11.1.6. Update the gRPC service
vim examples/protos/helloworld.proto
Add:
// Sends another greeting
rpc SayHelloAgain (HelloRequest) returns (HelloReply) {}
at the position marked below:
// The greeting service definition.
service Greeter {
  // Sends a greeting
  rpc SayHello (HelloRequest) returns (HelloReply) {}

  // *add it here*

  //rpc SayHelloStreamReply (HelloRequest) returns (stream HelloReply) {}
}
After the change it becomes:
// The greeting service definition.
service Greeter {
  // Sends a greeting
  rpc SayHello (HelloRequest) returns (HelloReply) {}

  // Sends another greeting
  rpc SayHelloAgain (HelloRequest) returns (HelloReply) {}

  //rpc SayHelloStreamReply (HelloRequest) returns (stream HelloReply) {}
}
11.1.7. Generate the gRPC code
In the examples/python/helloworld directory, run:
python -m grpc_tools.protoc -I../../protos --python_out=. --pyi_out=. --grpc_python_out=. ../../protos/helloworld.proto
11.1.8. Update and run the application
1. Update the server. In the same directory, edit greeter_server.py
vim greeter_server.py
class Greeter(helloworld_pb2_grpc.GreeterServicer):
    def SayHello(self, request, context):
        return helloworld_pb2.HelloReply(message=f'Hello, {request.name}!')

    def SayHelloAgain(self, request, context):
        return helloworld_pb2.HelloReply(message=f'Hello again, {request.name}!')
...
The following method has been added to the code above:
def SayHelloAgain(self, request, context):
    return helloworld_pb2.HelloReply(message=f'Hello again, {request.name}!')
2. Update the client. In the same directory, edit greeter_client.py
vim greeter_client.py
def run():
    with grpc.insecure_channel('localhost:50051') as channel:
        stub = helloworld_pb2_grpc.GreeterStub(channel)
        response = stub.SayHello(helloworld_pb2.HelloRequest(name='you'))
        print("Greeter client received: " + response.message)
        response = stub.SayHelloAgain(helloworld_pb2.HelloRequest(name='you'))
        print("Greeter client received: " + response.message)
The following lines have been added to the code above:
response = stub.SayHelloAgain(helloworld_pb2.HelloRequest(name='you'))
print("Greeter client received: " + response.message)
3. Run
In the examples/python/helloworld directory:
(1) Run the server
python greeter_server.py
[root@gitserver helloworld]# python greeter_server.py Server started, listening on 50051
(2) In another terminal, in the same directory, run the client
python greeter_client.py
[root@gitserver helloworld]# python greeter_client.py Will try to greet world ... Greeter client received: Hello, you! Greeter client received: Hello again, you!
12. zhiyan-mod-iptables
12.1. zhiyan-mod-iptables
su root
python -m pip install --upgrade pip
pip install python-iptables --break-system-packages
python
>>> import iptc
>>> iptc.easy.dump_chain('filter', 'OUTPUT', ipv6=False)
[{'target': 'LIBVIRT_OUT', 'counters': (100462, 12548250)}]
Dump the nat table:
>>> iptc.easy.dump_table('nat', ipv6=False)
>>> iptc.easy.dump_table('nat', ipv6=False) {'PREROUTING': [{'addrtype': {'dst-type': 'LOCAL'}, 'target': ..... ..... ..... ..... ..... ..... 24', 'target': 'MASQUERADE', 'counters': (0, 0)}]}
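The dump helpers return plain Python structures (lists of dicts), so rule counters can be processed directly. A minimal sketch, using the OUTPUT-chain dump shown above as sample data (no live iptables access needed):

```python
# Sample data in the shape returned by iptc.easy.dump_chain('filter', 'OUTPUT')
chain_dump = [{'target': 'LIBVIRT_OUT', 'counters': (100462, 12548250)}]

def total_traffic(rules):
    """Sum the (packets, bytes) counters over all rules in a chain dump."""
    packets = sum(r['counters'][0] for r in rules if 'counters' in r)
    byte_count = sum(r['counters'][1] for r in rules if 'counters' in r)
    return packets, byte_count

print(total_traffic(chain_dump))  # → (100462, 12548250)
```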
13. zhiyan
13.1. zhiyan-web-flutter
pacman -S flutter
cd /home/czh/workspace/github/ZhiYanModule
git clone ssh://git@git.cdgeekcamp.com:4295/zhiyan/zhiyan-web-flutter.git
cd zhiyan-web-flutter
git checkout dev
git pull origin dev
flutter build web --release --web-renderer html
Font asset "CupertinoIcons.ttf" was tree-shaken, reducing it from 283452 to 1272 bytes (99.6% reduction). Tree-shaking can be disabled by providing the --no-tree-shake-icons flag when building your app. Font asset "MaterialIcons-Regular.otf" was tree-shaken, reducing it from 1645184 to 10028 bytes (99.4% reduction). Tree-shaking can be disabled by providing the --no-tree-shake-icons flag when building your app. Compiling lib/main.dart for the Web... 26.4s
cd build/web
python -m http.server
Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ...
Open http://0.0.0.0:8000/ in a browser.

13.2. zhiyan-web-server
cd /home/czh/workspace/github/ZhiYanModule
git clone ssh://git@git.cdgeekcamp.com:4295/zhiyan/zhiyan-web-server.git
cd zhiyan-web-server
cp src/main/resources/application.properties.sample src/main/resources/application.properties
vim src/main/resources/application.properties
Change:
spring.datasource.url=jdbc:postgresql://127.0.0.1:5432/zhiyan
spring.datasource.username=zy
spring.datasource.password=geek
Also change:
logging.config=/home/czh/workspace/temp/zhiyan-web-server/src/main/resources/logback.xml
# Directory for saving user avatars
application.config.user-photo-save-dir=/home/czh/workspace/temp/zhiyan-web-server/var/images/avatar
cp src/main/resources/logback.xml.sample src/main/resources/logback.xml
vim src/main/resources/logback.xml
Change:
<file>/home/czh/workspace/temp/zhiyan-web-server/var/log/web-server.log</file>
mvn spring-boot:run
The terminal output ends with:

13.3. Cargo.toml
cd /home/czh/workspace/github/ZhiYan
cat << EOF > Cargo.toml
[workspace]
members = ["libzymod-rust"]
exclude = ["libzygrpc", "nginx-access-log-parser", "nginx-error-log-parser"]
[patch]
[patch.crates-io]
[patch.crates-io.libzymod-rust]
path = "/home/czh/workspace/github/ZhiYan/libzymod-rust"
[patch.crates-io.libzygrpc]
path = "/home/czh/workspace/github/ZhiYan/libzygrpc/rust"
EOF
cd /home/czh/workspace/github/ZhiYanModule
cat << EOF >Cargo.toml
[workspace]
members = [
"zhiyan-mod-php-fpm",
]
[patch.crates-io]
libzymod-rust = { path = '/home/czh/workspace/github/ZhiYan/libzymod-rust' }
libzygrpc = { path = '/home/czh/workspace/github/ZhiYan/libzygrpc/rust' }
EOF
13.4. zhiyan-agent
cd /home/czh/workspace/github/ZhiYanModule
git clone ssh://git@git.cdgeekcamp.com:4295/zhiyan/zhiyan-agent.git
cd zhiyan-agent
git checkout dev
git pull origin dev
cp conf/agent.conf.sample conf/agent.conf
vim conf/agent.conf
Change the following in conf/agent.conf:
agent_host=192.168.2.134
server_host=192.168.2.134
host=192.168.2.134
token=YmY0MD********************MGQ2OGI3MTEyODNiYjAyZGJjMA==
cp conf/agent.log.yaml.sample conf/agent.log.yaml
cargo build --release
cd ../target/release/
./zhiyan-agent -c /home/czh/workspace/github/ZhiYanModule/zhiyan-agent/conf/agent.conf -l /home/czh/workspace/github/ZhiYanModule/zhiyan-agent/conf/agent.log.yaml
13.5. zhiyan-server
cd /home/czh/workspace/github/ZhiYanModule
git clone ssh://git@git.cdgeekcamp.com:4295/zhiyan/zhiyan-server.git
cd zhiyan-server
git checkout dev
git pull origin dev
cp conf/server.conf.sample conf/server.conf
vim conf/server.conf
Change the following in conf/server.conf:
postgresql_username=zy
postgresql_password=geek
postgresql_host=localhost
postgresql_port=5432
postgresql_database=zhiyan
cp conf/server.log.yaml.sample conf/server.log.yaml
cargo build --release
cd ../target/release/
./zhiyan-server -c /home/czh/workspace/github/ZhiYanModule/zhiyan-server/conf/server.conf -l /home/czh/workspace/github/ZhiYanModule/zhiyan-server/conf/server.log.yaml
14. zhiyan-mod
14.1. zhiyan-mod-cpu
15. Python
15.1. Installing Python 3.11 on CentOS 9
15.1.1. Build and install from source
mkdir ~/downloads
dnf install -y gcc gcc-c++ make libffi-devel bzip2-devel readline-devel ncurses-devel tcl-devel tcl libuuid-devel zlib-devel zlib xz-devel xz tk-devel tk openssl-devel sqlite-devel
cd ~/downloads
wget --no-check-certificate https://www.python.org/ftp/python/3.11.5/Python-3.11.5.tar.xz
tar xf Python-3.11.5.tar.xz
cd Python-3.11.5
./configure --prefix=/usr/local/python-3.11.5 \
--enable-optimizations \
--with-ensurepip \
--enable-loadable-sqlite-extensions
make
make install
ln -s /usr/local/python-3.11.5 /usr/local/python3
ln -s /usr/local/python3/bin/pip3 /usr/local/bin/gpip
ln -s /usr/local/python3/bin/python3 /usr/local/bin/gpy
gpip install --upgrade -i https://pypi.tuna.tsinghua.edu.cn/simple pip
gpip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple
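After building, it is worth confirming that the extension modules backed by the -devel packages installed above (OpenSSL, SQLite, zlib, bzip2, xz) actually compiled. A quick check, to be run with the new interpreter (`gpy`) — a sketch:

```python
# Verify that extension modules backed by the -devel packages built correctly.
import importlib.util

required = ["ssl", "sqlite3", "zlib", "bz2", "lzma"]
missing = [name for name in required if importlib.util.find_spec(name) is None]

if missing:
    print("missing modules:", ", ".join(missing))
else:
    print("all required modules available")
```

If anything is reported missing, install the corresponding -devel package and rebuild.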
15.2. Managing Python versions
This section only outlines the approach; the commands cannot be used verbatim.
#List the Python versions installed on this machine
[root@192 workspace]# ls /usr/bin/python*
#Create a symlink to change which python is used
[root@192 workspace]# ln -s /usr/bin/python3.9 python
#or
[root@192 workspace]# ln -s /usr/bin/python311 python
Change where pip installs packages
[root@192 workspace]# which pip
/usr/bin/pip
[root@192 workspace]# realpath /usr/local/bin/gpip
/usr/local/python-3.11.5/bin/pip3
#Create a symlink to change which pip is used
[root@192 workspace]# ln -s /usr/local/python-3.11.5/bin/pip3 /usr/bin/pip
15.3. pip
Case 1: pip cannot be used in the Python environment that ships with the Linux distribution; it reports:
[czh@minikube python_hello]$ pip bash: pip: command not found
python -m ensurepip
python -m pip --version
Install flask as a test:
python -m pip install flask
Once that succeeds, upgrade pip:
python -m pip install --upgrade pip
If running pip prints its usage text, the installation succeeded:
pip
16. Terminal
16.1. zsr
sudo pacman -S check
cd /home/czh/workspace/github
git clone git@github.com:fifilyu/zsr.git
cd zsr
cmake .
make
make install
cd bin
./zsr -c 5 --cpu
cpu=0.00,0.00,0.00; cpu=98.98,1.02,0.00; cpu=98.71,1.29,0.00; cpu=99.11,0.89,0.03;
Add it to PATH:
echo 'export PATH=$PATH:~/workspace/github/zsr/bin' >> ~/.bashrc
source ~/.bashrc
Then run from any directory:
zsr -c 5 --cpu
cpu=0.00,0.00,0.00; cpu=98.98,1.02,0.00; cpu=98.71,1.29,0.00; cpu=99.11,0.89,0.03;
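Each sample in the `--cpu` output has the form `cpu=a,b,c;` with three percentage fields. A small parser for this format (the meaning of the individual fields is not documented here, so they are simply returned as floats) — a sketch:

```python
def parse_zsr_cpu(line):
    """Parse zsr --cpu output like 'cpu=0.00,0.00,0.00; cpu=98.98,1.02,0.00;'
    into a list of 3-tuples of floats, one per sample."""
    samples = []
    for chunk in line.split(';'):
        chunk = chunk.strip()
        if not chunk.startswith('cpu='):
            continue  # skip empty trailing chunks
        samples.append(tuple(float(v) for v in chunk[len('cpu='):].split(',')))
    return samples

print(parse_zsr_cpu("cpu=0.00,0.00,0.00; cpu=98.98,1.02,0.00;"))
# → [(0.0, 0.0, 0.0), (98.98, 1.02, 0.0)]
```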
17. MQTT
17.1. MQTT demo
pip install paho-mqtt
This demo was run on CentOS Stream 9.
Install mosquitto (an MQTT broker):
yum install mosquitto
systemctl status mosquitto.service
cat <<EOF >>/etc/mosquitto/mosquitto.conf
allow_anonymous true
listener 1883 0.0.0.0
EOF
systemctl restart mosquitto.service
Subscribe in one terminal:
mosquitto_sub -t 'test/topic' -v
Publish in another terminal:
mosquitto_pub -t 'test/topic' -m 'hello world'
[root@master ~]# mosquitto_sub -t 'test/topic' -v test/topic hello world
If you see the output above, mosquitto is working.
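`mosquitto_sub -t` also accepts wildcard filters: `+` matches exactly one topic level, `#` matches the rest of the topic. A simplified model of the matching rule, for illustration only (the real logic lives in the broker, and edge cases such as `$`-prefixed topics are ignored here):

```python
def topic_matches(filter_, topic):
    """Model of MQTT topic-filter matching: '+' matches one level, '#' the rest."""
    f_parts, t_parts = filter_.split('/'), topic.split('/')
    for i, f in enumerate(f_parts):
        if f == '#':                      # multi-level wildcard: matches everything below
            return True
        if i >= len(t_parts):
            return False
        if f != '+' and f != t_parts[i]:  # '+' matches any single level
            return False
    return len(f_parts) == len(t_parts)

print(topic_matches('test/+', 'test/topic'))    # → True
print(topic_matches('test/#', 'test/topic/x'))  # → True
print(topic_matches('test/topic', 'other'))     # → False
```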
subscribe.py
# python3.6

import random

from paho.mqtt import client as mqtt_client

#broker = 'broker.emqx.io'
broker = '192.168.122.254'
port = 1883
topic = "python/mqtt"
# Generate a Client ID with the subscribe prefix.
client_id = f'subscribe-{random.randint(0, 100)}'
# username = 'emqx'
# password = 'public'

def connect_mqtt() -> mqtt_client:
    def on_connect(client, userdata, flags, rc):
        if rc == 0:
            print("Connected to MQTT Broker!")
        else:
            print("Failed to connect, return code %d\n", rc)

    client = mqtt_client.Client(client_id)
    # client.username_pw_set(username, password)
    client.on_connect = on_connect
    client.connect(broker, port)
    return client

def subscribe(client: mqtt_client):
    def on_message(client, userdata, msg):
        print(f"Received `{msg.payload.decode()}` from `{msg.topic}` topic")

    client.subscribe(topic)
    client.on_message = on_message

def run():
    client = connect_mqtt()
    subscribe(client)
    client.loop_forever()

if __name__ == '__main__':
    run()
publish.py
# python 3.6

import random
import time

from paho.mqtt import client as mqtt_client

#broker = 'broker.emqx.io'
broker = '192.168.122.254'
port = 1883
topic = "python/mqtt"
# Generate a Client ID with the publish prefix.
client_id = f'publish-{random.randint(0, 1000)}'
# username = 'emqx'
# password = 'public'

def connect_mqtt():
    def on_connect(client, userdata, flags, rc):
        if rc == 0:
            print("Connected to MQTT Broker!")
        else:
            print("Failed to connect, return code %d\n", rc)

    client = mqtt_client.Client(client_id)
    # client.username_pw_set(username, password)
    client.on_connect = on_connect
    client.connect(broker, port)
    return client

def publish(client):
    msg_count = 1
    while True:
        time.sleep(1)
        msg = f"messages: {msg_count}"
        result = client.publish(topic, msg)
        # result: [0, 1]
        status = result[0]
        if status == 0:
            print(f"Send `{msg}` to topic `{topic}`")
        else:
            print(f"Failed to send message to topic {topic}")
        msg_count += 1
        if msg_count > 5:
            break

def run():
    client = connect_mqtt()
    client.loop_start()
    publish(client)
    client.loop_stop()

if __name__ == '__main__':
    run()
18. troubleshoot
18.1. troubleshoot
Q1:Failed to find catalog entry: Invalid argument
journalctl --update-catalog
Q2:MongoDB loads but breaks, returning status=14
rm -rf /tmp/mongodb-27017.sock
Q3:No sound on Arch Linux
sudo pacman -S sof-firmware
sudo pacman -S alsa-ucm-conf
reboot
Q4:error: Refusing to undefine while domain managed save image exists
virsh managedsave-remove win7
Q:Failed to start OpenSSH Daemon
sshd -t
Q5:jdk-openjdk and jre-openjdk are in conflict
sudo pacman -Sy jre-openjdk
Q6:VMware shared folders not visible
vmhgfs-fuse /mnt/hgfs
Q7:Virtual machine reports a "BUG: soft lockup" (or multiple at the same time), with errors like the following:
BUG: soft lockup - CPU#6 stuck for 73s! [flush-253:0:1207]
BUG: soft lockup - CPU#7 stuck for 74s! [processname:15706]
BUG: soft lockup - CPU#5 stuck for 63s! [processname:25582]
BUG: soft lockup - CPU#0 stuck for 64s! [proceessname:15789]
--or--
<time> <hostname> kernel: NMI watchdog: BUG: soft lockup - CPU#6 stuck for 25s! [ksoftirqd/6:38]
<time> <hostname> kernel: NMI watchdog: BUG: soft lockup - CPU#7 stuck for 22s! [ksoftirqd/7:43]
<time> <hostname> kernel: NMI watchdog: BUG: soft lockup - CPU#7 stuck for 24s! [NetworkManager:945]
<time> <hostname> kernel: NMI watchdog: BUG: soft lockup - CPU#7 stuck for 22s! [watchdog/7:41]
A7: Reference: https://access.redhat.com/solutions/1503333
1. Set the kernel.softlockup_panic variable to 0:
sysctl kernel.softlockup_panic
echo "kernel.softlockup_panic=0" >> /etc/sysctl.conf
sysctl -p
Since RHEL7, this parameter should be set to 0 by default in virtual machines.
2. Raise the watchdog threshold:
echo kernel.watchdog_thresh=30 >> /etc/sysctl.conf
sysctl -p
Root cause: resource exhaustion, most likely from many programs running in the background; if possible, just reboot the machine. On soft lockups in virtual machines, this Red Hat article explains the cause clearly: https://access.redhat.com/articles/5008811 — "Typically, each vCPU is represented by a process (or thread) on the host. Because a vCPU is just a process, it gets scheduled out so that other programs can use the CPU, and while it is descheduled everything running on that vCPU stops. Note, however, that from the guest's perspective the vCPU appears to run without interruption, i.e. it does not know it has been paused (rescheduled)." If you cannot read that article, try https://developers.redhat.com/blog/2021/02/10/how-to-activate-your-no-cost-red-hat-enterprise-linux-subscription#
19. MySQL master/slave replication
On the master, open a mysql client:
mysql
mysql>SET GLOBAL server_id = 1;
mysql>SET GLOBAL validate_password.policy = 0;
mysql> CREATE USER 'repl'@'%' IDENTIFIED BY '@#$Rfg345634523rft4fa';
mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';
mysql> FLUSH TABLES WITH READ LOCK;
Do not exit the client after running FLUSH TABLES WITH READ LOCK. Open another terminal on the master:
mysql
mysql> SHOW MASTER STATUS\G
*************************** 1. row ***************************
             File: mysql-bin.000002
         Position: 690
     Binlog_Do_DB:
 Binlog_Ignore_DB:
Executed_Gtid_Set:
1 row in set (0.00 sec)
Record the File and Position values from this output.
On the slave:
echo 'server-id=2' >> /etc/my.cnf
mysql
mysql>SET GLOBAL server_id = 2;
mysql> CHANGE REPLICATION SOURCE TO
SOURCE_HOST='192.168.122.254',
SOURCE_USER='repl',
SOURCE_PASSWORD='@#$Rfg345634523rft4fa',
SOURCE_LOG_FILE='mysql-bin.000002',
SOURCE_LOG_POS=690;
Back in the master's first client, release the lock:
mysql> UNLOCK TABLES;
Check the replication status on the slave:
mysql> show slave status\G
Output:
*************************** 1. row ***************************
               Slave_IO_State:
                  Master_Host: master
                  Master_User: repl
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000002
          Read_Master_Log_Pos: 690
               Relay_Log_File: slave-relay-bin.000001
                Relay_Log_Pos: 4
        Relay_Master_Log_File: mysql-bin.000002
             Slave_IO_Running: No
            Slave_SQL_Running: No
              Replicate_Do_DB:
...
...
       Master_public_key_path:
        Get_master_public_key: 0
            Network_Namespace:
1 row in set, 1 warning (0.00 sec)
At this point Slave_IO_Running and Slave_SQL_Running are both No because replication has not been started yet. Start it:
start slave;
Check the status again:
show slave status\G
When Slave_IO_Running and Slave_SQL_Running are both Yes, replication is running.
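Checking those two fields can be scripted. A sketch that decides replication health from a parsed `SHOW SLAVE STATUS` row (the dict below is hypothetical sample data, not live output):

```python
def replication_healthy(status):
    """Replication is running only when both the IO and SQL threads report Yes."""
    return (status.get('Slave_IO_Running') == 'Yes'
            and status.get('Slave_SQL_Running') == 'Yes')

# Hypothetical sample row, mirroring the fields shown above.
sample = {'Slave_IO_Running': 'No', 'Slave_SQL_Running': 'No'}
print(replication_healthy(sample))  # → False

sample.update(Slave_IO_Running='Yes', Slave_SQL_Running='Yes')
print(replication_healthy(sample))  # → True
```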
20. pypi
20.1. Packaging as an executable
mongo-py
├── common.py
├── file.json
├── main.py
├── mongo_tool
│   ├── delete.py
│   ├── insert.py
│   ├── query.py
│   └── update.py
├── pyproject.toml
├── README.md
[tool.poetry]
name = "czh-mongo-py"
version = "0.0.1.0"
description = ""
authors = ["xiangyouzhuan <xiangyouzhuan2018@gmail.com>"]
readme = "README.md"
packages = [
    { include = "common.py" },  # include the required files
    { include = "main.py" },
    { include = "mongo_tool" }
]

[tool.poetry.dependencies]
python = ">=3.8.5 <4.0.0"

[tool.poetry.scripts]
# 'czhmongopy' is the command users run, equivalent to `python main.py`;
# "main:main" points at the main() function in main.py
czhmongopy = "main:main"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
cd mongo-py
poetry build -f wheel
Install locally to test:
pip install -U --user dist/xxxxx.whl
List the files in the Python package:
pip show czh-mongo-py -f
Get the path of the installed command-line tool:
python -c "import site; print('%s/bin/czhmongopy' % site.USER_BASE)"
~/.local/bin/czhmongopy
Run it:
~/.local/bin/czhmongopy
命令行参数错误,请查看使用说明
usage: mongo_tool [-i file] [-d filter] [-u filter json] [-q filter]

mongo工具

options:
  -h, --help            show this help message and exit
  -i file, --insert file
                        将js文件内容写入数据库
  -u filter json, --update filter json
                        更改满足filter的数据
  -q filter, --query filter
                        查询满足filter的数据
  -d filter, --delete filter
                        删除所有满足条件的document
  -v, --version         显示版本信息
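The usage text above comes from an argparse parser that the `[tool.poetry.scripts]` entry point dispatches to. A minimal sketch of what such a `main` module might look like — the option set mirrors the help output, but the help strings and internals here are assumptions, not the project's actual code:

```python
import argparse

def build_parser():
    # Option set modeled on the mongo_tool usage text; details are assumed.
    parser = argparse.ArgumentParser(prog='mongo_tool', description='mongo tool')
    parser.add_argument('-i', '--insert', metavar='file',
                        help='write the contents of a js file into the database')
    parser.add_argument('-u', '--update', nargs=2, metavar=('filter', 'json'),
                        help='update documents matching filter')
    parser.add_argument('-q', '--query', metavar='filter',
                        help='query documents matching filter')
    parser.add_argument('-d', '--delete', metavar='filter',
                        help='delete all documents matching filter')
    parser.add_argument('-v', '--version', action='version', version='0.0.1.0')
    return parser

def main():
    # The poetry entry point "main:main" would call this function.
    args = build_parser().parse_args()
    print(args)

if __name__ == '__main__':
    main()
```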
20.2. Upload to PyPI
twine upload --repository testpypi dist/*
twine upload dist/*
-
For username, enter __token__
-
For password, enter a token (generated on PyPI)
Uploading distributions to https://test.pypi.org/legacy/ Enter your username: __token__ Uploading czh-mongo-py-0.0.1-py3-none-any.whl 100% ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 8.2/8.2 kB • 00:01 • ? Uploading czh-mongo-py-0.0.1.tar.gz 100% ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 6.8/6.8 kB • 00:00 • ?
Test installing czh-mongo-py from TestPyPI:
pip install --index-url https://test.pypi.org/simple/ --no-deps czh-mongo-py
Set the environment variables for the database connection:
export mongoship=192.168.122.52
export mongoshport=27017
czhmongopy
命令行参数错误,请查看使用说明
usage: mongo_tool [-i file] [-d filter] [-u filter json] [-q filter]

mongo工具

options:
  -h, --help            show this help message and exit
  -i file, --insert file
                        将js文件内容写入数据库
  -u filter json, --update filter json
                        更改满足filter的数据
  -q filter, --query filter
                        查询满足filter的数据
  -d filter, --delete filter
                        删除所有满足条件的document
  -v, --version         显示版本信息
21. Kafka
21.1. Installation and configuration
21.1.1. Linux
Install
mkdir ~/downloads
cd ~/downloads
rm -rf kafka_2.12-2.3.1 kafka_2.12-2.3.1.tgz /usr/local/kafka_2.12-2.3.1
wget -c http://mirrors.tuna.tsinghua.edu.cn/apache/kafka/2.3.1/kafka_2.12-2.3.1.tgz
tar xf kafka_2.12-2.3.1.tgz
mv kafka_2.12-2.3.1 /usr/local/kafka_2.12-2.3.1
cd /usr/local/kafka_2.12-2.3.1
bin/zookeeper-server-start.sh config/zookeeper.properties
bin/kafka-server-start.sh config/server.properties
Topic
bin/kafka-topics.sh --create --bootstrap-server 192.168.2.2:9092 --replication-factor 1 --partitions 1 --topic test
bin/kafka-topics.sh --list --bootstrap-server 192.168.2.2:9092
Test
# bin/kafka-console-producer.sh --broker-list 192.168.2.2:9092 --topic test
This is a message
This is another message

# bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
This is a message
This is another message
21.1.2. Windows
bin\windows\zookeeper-server-start.bat config\zookeeper.properties
bin\windows\kafka-server-start.bat config\server.properties
bin\windows\kafka-topics.bat --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic test
bin\windows\kafka-topics.bat --list --bootstrap-server localhost:9092
bin\windows\kafka-console-producer.bat --broker-list localhost:9092 --topic test
bin\windows\kafka-console-consumer.bat --bootstrap-server 192.168.2.2:9092 --topic test --from-beginning
21.1.3. Configuration
To accept connections from outside, edit config/server.properties
and set the following parameter:
listeners=PLAINTEXT://0.0.0.0:9092
22. RabbitMQ
22.1. Installing RabbitMQ
Import the RabbitMQ and Cloudsmith signing keys:
## primary RabbitMQ signing key
rpm --import 'https://github.com/rabbitmq/signing-keys/releases/download/3.0/rabbitmq-release-signing-key.asc'
## modern Erlang repository
rpm --import 'https://github.com/rabbitmq/signing-keys/releases/download/3.0/cloudsmith.rabbitmq-erlang.E495BB49CC4BBE5B.key'
## RabbitMQ server repository
rpm --import 'https://github.com/rabbitmq/signing-keys/releases/download/3.0/cloudsmith.rabbitmq-server.9F4587F226208342.key'
Write the following into /etc/yum.repos.d/rabbitmq.repo:
cat <<'EOF' >/etc/yum.repos.d/rabbitmq.repo
# In /etc/yum.repos.d/rabbitmq.repo

##
## Zero dependency Erlang RPM
##

[modern-erlang]
name=modern-erlang-el9
# uses a Cloudsmith mirror @ yum.novemberain.com.
# Unlike Cloudsmith, it does not have any traffic quotas
baseurl=https://yum1.novemberain.com/erlang/el/9/$basearch
        https://yum2.novemberain.com/erlang/el/9/$basearch
        https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-erlang/rpm/el/9/$basearch
repo_gpgcheck=1
enabled=1
gpgkey=https://github.com/rabbitmq/signing-keys/releases/download/3.0/cloudsmith.rabbitmq-erlang.E495BB49CC4BBE5B.key
gpgcheck=1
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt
metadata_expire=300
pkg_gpgcheck=1
autorefresh=1
type=rpm-md

[modern-erlang-noarch]
name=modern-erlang-el9-noarch
# uses a Cloudsmith mirror @ yum.novemberain.com.
# Unlike Cloudsmith, it does not have any traffic quotas
baseurl=https://yum1.novemberain.com/erlang/el/9/noarch
        https://yum2.novemberain.com/erlang/el/9/noarch
        https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-erlang/rpm/el/9/noarch
repo_gpgcheck=1
enabled=1
gpgkey=https://github.com/rabbitmq/signing-keys/releases/download/3.0/cloudsmith.rabbitmq-erlang.E495BB49CC4BBE5B.key
       https://github.com/rabbitmq/signing-keys/releases/download/3.0/rabbitmq-release-signing-key.asc
gpgcheck=1
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt
metadata_expire=300
pkg_gpgcheck=1
autorefresh=1
type=rpm-md

[modern-erlang-source]
name=modern-erlang-el9-source
# uses a Cloudsmith mirror @ yum.novemberain.com.
# Unlike Cloudsmith, it does not have any traffic quotas
baseurl=https://yum1.novemberain.com/erlang/el/9/SRPMS
        https://yum2.novemberain.com/erlang/el/9/SRPMS
        https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-erlang/rpm/el/9/SRPMS
repo_gpgcheck=1
enabled=1
gpgkey=https://github.com/rabbitmq/signing-keys/releases/download/3.0/cloudsmith.rabbitmq-erlang.E495BB49CC4BBE5B.key
       https://github.com/rabbitmq/signing-keys/releases/download/3.0/rabbitmq-release-signing-key.asc
gpgcheck=1
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt
metadata_expire=300
pkg_gpgcheck=1
autorefresh=1

##
## RabbitMQ Server
##

[rabbitmq-el9]
name=rabbitmq-el9
baseurl=https://yum2.novemberain.com/rabbitmq/el/9/$basearch
        https://yum1.novemberain.com/rabbitmq/el/9/$basearch
        https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/rpm/el/9/$basearch
repo_gpgcheck=1
enabled=1
# Cloudsmith's repository key and RabbitMQ package signing key
gpgkey=https://github.com/rabbitmq/signing-keys/releases/download/3.0/cloudsmith.rabbitmq-server.9F4587F226208342.key
       https://github.com/rabbitmq/signing-keys/releases/download/3.0/rabbitmq-release-signing-key.asc
gpgcheck=1
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt
metadata_expire=300
pkg_gpgcheck=1
autorefresh=1
type=rpm-md

[rabbitmq-el9-noarch]
name=rabbitmq-el9-noarch
baseurl=https://yum2.novemberain.com/rabbitmq/el/9/noarch
        https://yum1.novemberain.com/rabbitmq/el/9/noarch
        https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/rpm/el/9/noarch
repo_gpgcheck=1
enabled=1
# Cloudsmith's repository key and RabbitMQ package signing key
gpgkey=https://github.com/rabbitmq/signing-keys/releases/download/3.0/cloudsmith.rabbitmq-server.9F4587F226208342.key
       https://github.com/rabbitmq/signing-keys/releases/download/3.0/rabbitmq-release-signing-key.asc
gpgcheck=1
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt
metadata_expire=300
pkg_gpgcheck=1
autorefresh=1
type=rpm-md

[rabbitmq-el9-source]
name=rabbitmq-el9-source
baseurl=https://yum2.novemberain.com/rabbitmq/el/9/SRPMS
        https://yum1.novemberain.com/rabbitmq/el/9/SRPMS
        https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/rpm/el/9/SRPMS
repo_gpgcheck=1
enabled=1
gpgkey=https://github.com/rabbitmq/signing-keys/releases/download/3.0/cloudsmith.rabbitmq-server.9F4587F226208342.key
gpgcheck=0
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt
metadata_expire=300
pkg_gpgcheck=1
autorefresh=1
type=rpm-md
EOF
dnf update -y
dnf install -y socat logrotate
dnf install -y erlang rabbitmq-server
systemctl enable rabbitmq-server
systemctl start rabbitmq-server
systemctl status rabbitmq-server
22.2. RabbitMQ 配置
#Add a user and set a password
rabbitmqctl add_user 'myuser' '2a55f70a841f18b97c3a7db939b7adc9e34a0f1b'
#List all users
rabbitmqctl list_users
#Add a virtual host named qa1
rabbitmqctl add_vhost qa1
#Grant permissions to the user.
#The first ".*" grants configure permission on every entity.
#The second ".*" grants write permission on every entity.
#The third ".*" grants read permission on every entity.
rabbitmqctl set_permissions -p "qa1" "myuser" ".*" ".*" ".*"
#Raise the system-wide file-handle limit
echo "* soft nofile 65535" >> /etc/security/limits.conf
echo "* hard nofile 65535" >> /etc/security/limits.conf
#Reboot for the limits to take effect
reboot
#Check the file-handle limit
ulimit -n
#Raise RabbitMQ's file-handle limit
sed -i "s|# LimitNOFILE=65536|LimitNOFILE=65536|" /usr/lib/systemd/system/rabbitmq-server.service
sed -i "s|LimitNOFILE=32768|#LimitNOFILE=32768|" /usr/lib/systemd/system/rabbitmq-server.service
#Restart the service
systemctl daemon-reload
systemctl restart rabbitmq-server
22.3. Using RabbitMQ
22.3.1. RabbitMQ cli
rabbitmq-plugins enable rabbitmq_management
RABBITMQ_ADMIN=`find / -name rabbitmqadmin`
cp $RABBITMQ_ADMIN /usr/bin/
chmod +x /usr/bin/rabbitmqadmin
#Declare a queue; durable=true enables persistence.
rabbitmqadmin declare queue name=test durable=true
#Declare a topic exchange
rabbitmqadmin declare exchange name=my.topic type=topic
#Publish a message
rabbitmqadmin publish routing_key=test payload="hello world"
#Publish a message via the exchange
rabbitmqadmin publish routing_key=my.test exchange=my.topic payload="hello world"
#Read messages from the queue
rabbitmqadmin get queue=test
22.3.2. python
pip3 install pika
cat << EOF > receive.py
import pika
credentials = pika.PlainCredentials('myuser','2a55f70a841f18b97c3a7db939b7adc9e34a0f1b')
connection = pika.BlockingConnection(pika.ConnectionParameters(
    'localhost', 5672, 'qa1', credentials))
channel = connection.channel()
channel.queue_declare(queue='balance')
def callback(ch, method, properties, body):
    print(" [x] Received %r" % body)
    ch.basic_ack(delivery_tag=method.delivery_tag)  # required because auto_ack=False
channel.basic_consume(queue='balance',
                      auto_ack=False,
                      on_message_callback=callback)
print(' [*] Waiting for messages. To exit press CTRL+C')
channel.start_consuming()
EOF
cat << EOF > send.py
#!/usr/bin/env python
import pika
auth = pika.PlainCredentials("myuser","2a55f70a841f18b97c3a7db939b7adc9e34a0f1b")
connect = pika.BlockingConnection(pika.ConnectionParameters("localhost", port=5672, virtual_host='qa1', credentials=auth))
channel = connect.channel()
channel.queue_declare(queue='balance')
channel.basic_publish(exchange='',
routing_key='balance',
body='Hello World!')
print(" [x] Sent 'Hello World!'")
connect.close()
EOF
python3 receive.py
[*] Waiting for messages. To exit press CTRL+C
Open another terminal and run
python3 send.py
[x] Sent 'Hello World!'
Back in the terminal running receive.py, you will see
[*] Waiting for messages. To exit press CTRL+C
 [x] Received b'Hello World!'
23. Switching yum mirrors
23.1. Changing the yum source
-
CentOS 7
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
yum clean all
yum makecache
yum update
-
CentOS Stream 9
mv /etc/yum.repos.d/centos.repo /etc/yum.repos.d/centos.repo.backup
touch /etc/yum.repos.d/centos.repo
Write the following into centos.repo:
# CentOS-Base.repo
#
# The mirror system uses the connecting IP address of the client and the
# update status of each mirror to pick mirrors that are updated to and
# geographically close to the client. You should use this for CentOS updates
# unless you are manually picking other mirrors.
#
# If the mirrorlist= does not work for you, as a fall back you can try the
# remarked out baseurl= line instead.
#

[base]
name=CentOS-$releasever - Base - mirrors.aliyun.com
#failovermethod=priority
baseurl=https://mirrors.aliyun.com/centos-stream/$stream/BaseOS/$basearch/os/
        http://mirrors.aliyuncs.com/centos-stream/$stream/BaseOS/$basearch/os/
        http://mirrors.cloud.aliyuncs.com/centos-stream/$stream/BaseOS/$basearch/os/
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/centos-stream/RPM-GPG-KEY-CentOS-Official

#additional packages that may be useful
#[extras]
#name=CentOS-$releasever - Extras - mirrors.aliyun.com
#failovermethod=priority
#baseurl=https://mirrors.aliyun.com/centos-stream/$stream/extras/$basearch/os/
#        http://mirrors.aliyuncs.com/centos-stream/$stream/extras/$basearch/os/
#        http://mirrors.cloud.aliyuncs.com/centos-stream/$stream/extras/$basearch/os/
#gpgcheck=1
#gpgkey=https://mirrors.aliyun.com/centos-stream/RPM-GPG-KEY-CentOS-Official

#additional packages that extend functionality of existing packages
[centosplus]
name=CentOS-$releasever - Plus - mirrors.aliyun.com
#failovermethod=priority
baseurl=https://mirrors.aliyun.com/centos-stream/$stream/centosplus/$basearch/os/
        http://mirrors.aliyuncs.com/centos-stream/$stream/centosplus/$basearch/os/
        http://mirrors.cloud.aliyuncs.com/centos-stream/$stream/centosplus/$basearch/os/
gpgcheck=1
enabled=0
gpgkey=https://mirrors.aliyun.com/centos-stream/RPM-GPG-KEY-CentOS-Official

[PowerTools]
name=CentOS-$releasever - PowerTools - mirrors.aliyun.com
#failovermethod=priority
baseurl=https://mirrors.aliyun.com/centos-stream/$stream/PowerTools/$basearch/os/
        http://mirrors.aliyuncs.com/centos-stream/$stream/PowerTools/$basearch/os/
        http://mirrors.cloud.aliyuncs.com/centos-stream/$stream/PowerTools/$basearch/os/
gpgcheck=1
enabled=0
gpgkey=https://mirrors.aliyun.com/centos-stream/RPM-GPG-KEY-CentOS-Official

[AppStream]
name=CentOS-$releasever - AppStream - mirrors.aliyun.com
#failovermethod=priority
baseurl=https://mirrors.aliyun.com/centos-stream/$stream/AppStream/$basearch/os/
        http://mirrors.aliyuncs.com/centos-stream/$stream/AppStream/$basearch/os/
        http://mirrors.cloud.aliyuncs.com/centos-stream/$stream/AppStream/$basearch/os/
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/centos-stream/RPM-GPG-KEY-CentOS-Official
Update the cache:
yum makecache && yum update
23.2. Common yum commands
# Find out which package provides a given command
yum whatprovides <command>
24. Proxychains
24.1. Installation and configuration
yum install -y epel-release
yum install -y proxychains-ng.x86_64
sed -E -i 's/socks4\s+127.0.0.1 9050/socks5 192.168.2.8 1080/' /etc/proxychains.conf
sed -i 's/#quiet_mode/quiet_mode/g' /etc/proxychains.conf
echo 'alias p="/usr/bin/proxychains4"' >> ~/.bashrc
# Apply the alias without restarting the terminal
source ~/.bashrc
# Test access through the proxy
p curl -I www.youtube.com
25. Setting up Drupal
25.1. Environment setup, part 1: install Nginx
yum install -y nginx
Edit the nginx configuration file:
cat << EOF > /etc/nginx/nginx.conf
# For more information on configuration, see:
# * Official English Documentation: http://nginx.org/en/docs/
# * Official Russian Documentation: http://nginx.org/ru/docs/
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
worker_rlimit_nofile 65535;
events {
worker_connections 65535;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '\$host \$server_port \$remote_addr - \$remote_user [\$time_local] "\$request" '
'\$status \$request_time \$body_bytes_sent "\$http_referer" '
'"\$http_user_agent" "\$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
server_names_hash_bucket_size 128;
server_name_in_redirect off;
client_header_buffer_size 32k;
large_client_header_buffers 4 32k;
client_header_timeout 3m;
client_body_timeout 3m;
client_max_body_size 50m;
client_body_buffer_size 256k;
send_timeout 3m;
gzip on;
gzip_min_length 1k;
gzip_buffers 4 16k;
gzip_http_version 1.0;
gzip_comp_level 2;
gzip_types image/svg+xml application/x-font-wof text/plain text/xml text/css application/xml application/xhtml+xml application/rss+xml application/javascript application/x-javascript text/javascript;
gzip_vary on;
proxy_redirect off;
proxy_set_header Host \$host;
proxy_set_header X-Real-IP \$remote_addr;
proxy_set_header REMOTE-HOST \$remote_addr;
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
proxy_connect_timeout 60;
proxy_send_timeout 60;
proxy_read_timeout 60;
proxy_buffer_size 256k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
proxy_temp_file_write_size 256k;
proxy_next_upstream error timeout invalid_header http_500 http_503 http_404;
proxy_max_temp_file_size 128m;
#Keep processing even if the client aborts the connection; helps with 499 response codes
proxy_ignore_client_abort on;
fastcgi_buffer_size 64k;
fastcgi_buffers 4 64k;
fastcgi_busy_buffers_size 128k;
index index.html index.htm index.php default.html default.htm default.php;
# Load modular configuration files from the /etc/nginx/conf.d directory.
# See http://nginx.org/en/docs/ngx_core_module.html#include
# for more information.
include /etc/nginx/conf.d/*.conf;
}
EOF
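The custom `main` log_format above puts $host and $server_port before the usual fields. A small Python sketch (the regex and names are ours, not part of nginx) that parses one access-log line written in that format:

```python
import re

# Field order follows the log_format "main" directive above:
# $host $server_port $remote_addr - $remote_user [$time_local] "$request"
# $status $request_time $body_bytes_sent ...
LOG_RE = re.compile(
    r'(?P<host>\S+) (?P<port>\d+) (?P<addr>\S+) - (?P<user>\S+) '
    r'\[(?P<time>[^\]]+)\] "(?P<request>[^"]*)" '
    r'(?P<status>\d{3}) (?P<rt>\S+) (?P<bytes>\d+)'
)

line = ('test.drupal.com 80 127.0.0.1 - - [19/Jan/2023:00:00:23 +0800] '
        '"GET /index.php HTTP/1.1" 200 0.012 451')
m = LOG_RE.match(line)
print(m.group('host'), m.group('status'), m.group('bytes'))
# test.drupal.com 200 451
```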
mkdir -p /etc/nginx/conf.d
test -f /etc/nginx/conf.d/default.conf && (test -f /etc/nginx/conf.d/default.conf.init || cp /etc/nginx/conf.d/default.conf /etc/nginx/conf.d/default.conf.init)
cat << EOF > /etc/nginx/conf.d/default.conf
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
root /usr/share/nginx/html;
# Load configuration files for the default server block.
include /etc/nginx/default.d/*.conf;
location / {
}
error_page 404 /404.html;
location = /40x.html {
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
EOF
nginx -t && (test -s /var/run/nginx.pid || rm -f /var/run/nginx.pid)
Start Nginx and enable it at boot
systemctl enable nginx
systemctl start nginx
systemctl status nginx
● nginx.service - nginx - high performance web server
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; disabled; vendor preset: disabled)
   Active: active (running) since Thu 2023-01-19 00:00:23 CST; 3s ago
     Docs: http://nginx.org/en/docs/
  Process: 9617 ExecStart=/usr/sbin/nginx -c /etc/nginx/nginx.conf (code=exited, status=0/SUCCESS)
 Main PID: 9618 (nginx)
   CGroup: /system.slice/nginx.service
           ├─9618 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
           ├─9619 nginx: worker process
           ├─9620 nginx: worker process
           ├─9621 nginx: worker process
           └─9622 nginx: worker process
Jan 19 00:00:23 lan_server systemd[1]: Starting nginx - high performance web server...
Jan 19 00:00:23 lan_server systemd[1]: Started nginx - high performance web server.
25.2. Environment setup, part 2: install PHP
yum install -y php php-bcmath php-fpm php-gd php-intl php-mbstring php-mysqlnd php-opcache php-pdo php-pecl-apcu php-devel
Once the environment is configured, verify the LNMP stack. Create a test file:
echo "<?php phpinfo(); ?>" >> /usr/share/nginx/html/index.php
#Restart the Nginx service
systemctl restart nginx
Visit the server's address in a local browser to check the configuration.
A result like the following means the environment is configured correctly.

25.3. Environment setup, part 3: LNMP test
cat << EOF > /etc/nginx/conf.d/test.drupal.com.conf
server {
listen 80;
server_name test.drupal.com;
root /data/web/test.drupal.com;
error_log /var/log/nginx/test.drupal.com_error.log;
access_log /var/log/nginx/test.drupal.com_access.log main;
location / {
try_files \$uri /index.php\$is_args\$query_string;
}
location ~ \.php\$ {
try_files \$uri \$uri/ 404;
fastcgi_pass unix:/run/php-fpm/www.sock;
fastcgi_param SCRIPT_FILENAME \$document_root\$fastcgi_script_name;
include fastcgi_params;
}
}
EOF
nginx -t && nginx -s reload
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
mkdir -p /data/web/test.drupal.com
# -O sets the saved file name and overwrites an existing file of that name
wget https://ftp.drupal.org/files/projects/drupal-9.5.11.tar.gz -O drupal-9.5.11.tar.gz
tar xf drupal-9.5.11.tar.gz
mv drupal-9.5.11/* /data/web/test.drupal.com
rm -rf drupal-9.5.11
chown -R apache:nginx /data/web/test.drupal.com
chmod -R 755 /data/web/test.drupal.com
echo '127.0.0.1 test.drupal.com' >> /etc/hosts
Finally, visit http://test.drupal.com to complete the installation.
26. hadoop
26.1. Environment
-
centos-stream-9
-
java11
-
hadoop-3.3.6
Prepare three servers: bigdata1, bigdata2, bigdata3.
systemctl stop firewalld.service
systemctl disable firewalld.service
cat << EOF >> /etc/hosts
192.168.122.25 bigdata1
192.168.122.146 bigdata2
192.168.122.219 bigdata3
EOF
yum update -y
yum install -y java-11-openjdk java-11-openjdk-devel
yum install -y rsync
# Quote EOF so $PATH and $JAVA_HOME are written literally instead of expanded now
cat << 'EOF' >> ~/.bashrc
export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-11.0.18.0.10-3.el9.x86_64
export PATH=$PATH:$JAVA_HOME/bin
export HADOOP_HOME=/opt/module/hadoop-3.3.6
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
EOF
source ~/.bashrc
useradd hadoop
passwd hadoop
mkdir /opt/module
chown -R hadoop:hadoop /opt/module
26.2. Installing and configuring Hadoop
su hadoop
ssh-keygen -t rsa
ssh-copy-id bigdata1
ssh-copy-id bigdata2
ssh-copy-id bigdata3
cd /opt/module
wget https://dlcdn.apache.org/hadoop/common/hadoop-3.3.6/hadoop-3.3.6.tar.gz
tar xf hadoop-3.3.6.tar.gz
hadoop version
Hadoop is now installed on this machine. Next, use rsync to copy the installed software to bigdata2 and bigdata3:
rsync -rvl /opt/module/ hadoop@bigdata2:/opt/module
rsync -rvl /opt/module/ hadoop@bigdata3:/opt/module
All software is now installed.
With the base environment ready, plan the roles each server in the cluster will take. The most important daemon, the NameNode, goes on the first server, YARN's ResourceManager on the second, and the SecondaryNameNode on the third:
|      | bigdata1           | bigdata2                     | bigdata3                    |
| HDFS | NameNode, DataNode | DataNode                     | SecondaryNameNode, DataNode |
| YARN | NodeManager        | ResourceManager, NodeManager | NodeManager                 |
Next, edit the configuration files:
cd /opt/module/hadoop-3.3.6/etc/hadoop
vi core-site.xml
<property>
<name>fs.defaultFS</name>
<value>hdfs://bigdata1:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/opt/module/hadoop-3.3.6/data/tmp</value>
</property>
vi hadoop-env.sh
export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-11.0.18.0.10-3.el9.x86_64/
vi hdfs-site.xml
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>bigdata3:50090</value>
</property>
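With dfs.replication set to 3, HDFS stores three copies of every block, so the cluster's usable capacity is roughly the raw capacity divided by the replication factor. A quick back-of-the-envelope sketch (the helper is illustrative, not part of Hadoop):

```python
def usable_capacity(raw_gib: float, replication: int) -> float:
    """Rough usable HDFS capacity, ignoring metadata and reserved space."""
    return raw_gib / replication

# Three DataNodes with 30 GiB each, replication factor 3:
print(usable_capacity(3 * 30, 3))  # 30.0 GiB of effective storage
```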
vi yarn-env.sh
export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-11.0.18.0.10-3.el9.x86_64/
vi yarn-site.xml
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>bigdata2</value>
</property>
vi mapred-env.sh
export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-11.0.18.0.10-3.el9.x86_64/
vi mapred-site.xml
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
vi workers
bigdata1
bigdata2
bigdata3
Then sync the configuration to the other two machines:
rsync -rvl /opt/module/hadoop-3.3.6/ hadoop@bigdata2:/opt/module/hadoop-3.3.6
rsync -rvl /opt/module/hadoop-3.3.6/ hadoop@bigdata3:/opt/module/hadoop-3.3.6
26.3. Cluster operations
Now start the cluster. On first startup, format the NameNode:
#Format the NameNode
cd /opt/module/hadoop-3.3.6
bin/hdfs namenode -format
#Start HDFS
sbin/start-dfs.sh
#Start YARN
sbin/start-yarn.sh
To access the web UIs by hostname, add the hostnames to your local hosts file.
27. WordPress
27.1. Environment setup, part 1: install Nginx
27.2. Environment setup, part 2: install PHP
27.3. Environment setup, part 3: install mariadb
#Install mariadb
yum install -y mariadb mariadb-server
#Start the mariadb service and enable it at boot
systemctl start mariadb
systemctl enable mariadb
#Check the service status
systemctl status mariadb
#Enter MariaDB
mysql
#Create the database and user
CREATE DATABASE wordpress;
CREATE USER 'user'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON wordpress.* TO 'user'@'localhost';
ALTER USER root@localhost IDENTIFIED VIA mysql_native_password USING PASSWORD('<your password>');
FLUSH PRIVILEGES;
#Exit the database
\q
27.4. Environment setup, part 4: install WordPress
cd /usr/share/nginx/html
wget https://cn.wordpress.org/wordpress-6.3.2-zh_CN.tar.gz
tar xf wordpress-6.3.2-zh_CN.tar.gz
cd /usr/share/nginx/html/wordpress
cp wp-config-sample.php wp-config.php
vim wp-config.php
Find the MySQL section of the file and update it with the values configured when installing mariadb:
// ** MySQL settings - You can get this info from your web host ** //
/** The name of the database for WordPress */
define('DB_NAME', 'wordpress');
/** MySQL database username */
define('DB_USER', 'user');
/** MySQL database password */
define('DB_PASSWORD', '123456');
/** MySQL hostname */
define('DB_HOST', 'localhost');
In the browser's address bar, enter http://<domain or instance public IP>/wordpress
e.g. http://192.xxx.xxx.xx/wordpress
This opens the WordPress installation page, where you can start configuring WordPress.

28. ElasticSearch
-
Environment
-
OS: centos-stream9
-
CPU: 2 cores
-
RAM: 4 GB
-
Elasticsearch version: 8.10
-
28.1. Installing ElasticSearch
#Import the public signing key:
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
#Add the repository
cat << EOF >/etc/yum.repos.d/elasticsearch.repo
[elasticsearch]
name=Elasticsearch repository for 8.x packages
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=0
autorefresh=1
type=rpm-md
EOF
#Install elasticsearch
sudo yum install -y --enablerepo=elasticsearch elasticsearch
Installation automatically creates a superuser named elastic and prints its password to the terminal. Save that password; it is needed later for the Kibana login and the filebeat configuration. |
#Example: if the password is vjT4*xK-Q_o__oXMWRY9, run
export ELASTIC_PASSWORD="vjT4*xK-Q_o__oXMWRY9"
vim /etc/elasticsearch/elasticsearch.yml
#Change the relevant settings to the following values
xpack.security.enabled: false
xpack.security.enrollment.enabled: false
xpack.security.http.ssl:
enabled: false
xpack.security.transport.ssl:
enabled: false
#Reload the systemd configuration
sudo /bin/systemctl daemon-reload
#Enable at boot
sudo /bin/systemctl enable elasticsearch.service
#Start the service
sudo systemctl start elasticsearch.service
#Check the service status
sudo systemctl status elasticsearch.service
curl -u elastic:$ELASTIC_PASSWORD http://localhost:9200
{
  "name" : "Cp8oag6",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "AT69_T_DTp-1qgIJlatQqA",
  "version" : {
    "number" : "8.10.4",
    "build_type" : "tar",
    "build_hash" : "f27399d",
    "build_flavor" : "default",
    "build_date" : "2016-03-30T09:51:41.449Z",
    "build_snapshot" : false,
    "lucene_version" : "9.7.0",
    "minimum_wire_compatibility_version" : "1.2.3",
    "minimum_index_compatibility_version" : "1.2.3"
  },
  "tagline" : "You Know, for Search"
}
28.2. Installing Kibana
If Kibana is installed on the same host as ElasticSearch, the host needs more than 4 GB of RAM. |
#Add the kibana repository
cat << EOF >/etc/yum.repos.d/kibana.repo
[kibana-8.x]
name=Kibana repository for 8.x packages
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
EOF
#Install kibana
sudo yum install -y kibana
vim /etc/kibana/kibana.yml
#Change the corresponding settings to the following values
server.port: 5601
server.host: "0.0.0.0"
#Start kibana
sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable kibana
sudo systemctl start kibana
curl -I http://localhost:5601
HTTP/1.1 200 OK
x-content-type-options: nosniff
referrer-policy: no-referrer-when-downgrade
permissions-policy: camera=(), display-capture=(), fullscreen=(self), geolocation=(), microphone=(), web-share=()
cross-origin-opener-policy: same-origin
content-security-policy: script-src 'self'; worker-src blob: 'self'; style-src 'unsafe-inline' 'self'
kbn-name: es3
content-type: text/html; charset=utf-8
cache-control: private, no-cache, no-store, must-revalidate
content-length: 90867
vary: accept-encoding
Date: Wed, 01 Nov 2023 12:04:55 GMT
Connection: keep-alive
Keep-Alive: timeout=120
Kibana is now up and running.

28.3. Installing filebeat
Install filebeat on the servers you want to monitor. In this example, filebeat is installed on the same host as ElasticSearch. |
mkdir downloads
cd downloads/
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-8.10.4-x86_64.rpm
sudo rpm -vi filebeat-8.10.4-x86_64.rpm
vim /etc/filebeat/filebeat.yml
Point Filebeat at the host and port where Elasticsearch is reachable, and give it a username and password with permission to set up Filebeat (the password is the default one generated when ElasticSearch was installed).
#Find the following settings and change their values
output.elasticsearch:
  hosts: ["http://<ElasticSearch host IP>:9200"]
  username: "elastic"
  password: "vjT4*xK-Q_o__oXMWRY9"
#List the available filebeat modules
filebeat modules list
#Enable the nginx module
filebeat modules enable nginx
#Edit the nginx module configuration
#var.paths is the path of nginx's access.log
vim /etc/filebeat/modules.d/nginx.yml
- module: nginx
  access:
    enabled: true
    var.paths: ["/var/log/nginx/access.log*"]
#Filebeat ships with predefined assets for parsing, indexing, and visualizing data.
#Load those assets
filebeat setup -e
#Start filebeat
systemctl enable filebeat
systemctl start filebeat
systemctl status filebeat
In the side navigation, click "Discover". To see Filebeat data, make sure the predefined filebeat-* index pattern is selected. At this point there is no data yet.

yum install -y nginx
systemctl enable nginx
systemctl start nginx
systemctl status nginx
#Script that repeatedly requests the default nginx page
cat << EOF >filebeat.sh
#!/bin/bash
while true;
do
curl -I http://<nginx host IP>:80
sleep 1
done
EOF
sh filebeat.sh
Wait about a minute and refresh the page to see the data

If no data appears in Kibana, try widening the time filter; by default Kibana shows only the last 15 minutes. |
29. minikube
29.1. Installing and using minikube
-
Requirements
-
2 CPUs
-
4 GB RAM
-
20 GB free disk space
-
1. Install minikube
mkdir /root/downloads
cd /root/downloads
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-latest.x86_64.rpm
sudo rpm -Uvh minikube-latest.x86_64.rpm
2. Install Docker
See the Docker section of this document.
3. Create a user and add it to the docker group
adduser czh
passwd czh
echo 'czh ALL=(ALL) ALL' >> /etc/sudoers
su czh
sudo usermod -aG docker $USER && newgrp docker
4. Set environment variables to use the proxy
Change HTTP_PROXY/HTTPS_PROXY and http_proxy/https_proxy to your proxy server's address; no_proxy can stay as shown.
export HTTP_PROXY=http://10.88.33.166:6666
export HTTPS_PROXY=http://10.88.33.166:6668
export NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.59.0/24,192.168.49.0/24,192.168.39.0/24
export http_proxy=http://10.88.33.166:6666
export https_proxy=http://10.88.33.166:6668
export no_proxy=localhost,127.0.0.1,10.96.0.0/12,192.168.59.0/24,192.168.49.0/24,192.168.39.0/24
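NO_PROXY mixes host names and CIDR ranges, and each client decides per destination whether to bypass the proxy. A sketch of that check using Python's ipaddress module (the helper name is ours; real clients implement this internally):

```python
import ipaddress

NO_PROXY = "localhost,127.0.0.1,10.96.0.0/12,192.168.59.0/24,192.168.49.0/24,192.168.39.0/24"

def bypass_proxy(host: str, no_proxy: str = NO_PROXY) -> bool:
    """Return True if the destination should skip the proxy."""
    for entry in no_proxy.split(","):
        if "/" in entry:  # CIDR range
            try:
                if ipaddress.ip_address(host) in ipaddress.ip_network(entry):
                    return True
            except ValueError:  # host is a name, not an IP address
                continue
        elif host == entry:
            return True
    return False

print(bypass_proxy("192.168.49.2"))  # True  — minikube's default subnet
print(bypass_proxy("8.8.8.8"))       # False — goes through the proxy
```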
5. Start the cluster with the docker driver:
minikube start --driver=docker
[czh@K8S-1 root]$ minikube start --driver=docker
😄  minikube v1.31.2 on Centos 9 (kvm/amd64)
✨  Using the docker driver based on user configuration
📌  Using Docker driver with root privileges
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
💾  Downloading Kubernetes v1.27.4 preload ...
    > preloaded-images-k8s-v18-v1...:  393.21 MiB / 393.21 MiB  100.00% 25.86 M
    > index.docker.io/kicbase/sta...:  447.62 MiB / 447.62 MiB  100.00% 3.98 Mi
❗  minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.40, but successfully downloaded docker.io/kicbase/stable:v0.0.40 as a fallback image
🔥  Creating docker container (CPUs=2, Memory=2200MB) ...
🐳  Preparing Kubernetes v1.27.4 on Docker 24.0.4 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring bridge CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
💡  kubectl not found. If you need it, try: 'minikube kubectl -- get pods -A'
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
Set docker as the default driver:
minikube config set driver docker
6. Configure the docker daemon inside minikube to use the proxy
(1) Enter the minikube container
minikube ssh
(2) Edit /etc/docker/daemon.json and add the following proxy configuration
{
  "proxies": {
    "http-proxy": "http://10.88.33.166:6666",
    "https-proxy": "http://10.88.33.166:6668",
    "no-proxy": "localhost,127.0.0.1,10.96.0.0/12,192.168.59.0/24,192.168.49.0/24,192.168.39.0/24"
  }
}
(3) Restart the docker service
sudo systemctl restart docker
(4) Exit the minikube container
ctrl+d
7. Install kubectl
#Add the kubectl repository
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/repodata/repomd.xml.key
EOF
#Install kubectl
sudo yum install -y kubectl
#Pull the image and load it into minikube.
#This step is for environments that cannot pull k8s images directly; skip it
#if you can reach Docker Hub normally or have a proxy configured.
docker pull kicbase/echo-server:1.0
minikube image load kicbase/echo-server:1.0
8. Create a pod
(1) Check the cluster status
kubectl cluster-info
[czh@localhost ~]$ kubectl cluster-info
Kubernetes control plane is running at https://192.168.49.2:8443
CoreDNS is running at https://192.168.49.2:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
(2) List the pods in the cluster
kubectl get pods
(3) Create a sample deployment and expose it on port 8080:
kubectl create deployment hello-minikube --image=kicbase/echo-server:1.0
kubectl expose deployment hello-minikube --type=NodePort --port=8080
(4) Check the hello-minikube service
kubectl get services hello-minikube
(5) Forward a local port to the service
kubectl port-forward service/hello-minikube 7080:8080 --address 0.0.0.0
Forwarding from 0.0.0.0:7080 -> 8080
Handling connection for 7080
Handling connection for 7080
Handling connection for 7080
Run in another terminal
curl -I http://localhost:7080/
HTTP/1.1 200 OK
Content-Type: text/plain
Date: Thu, 02 Nov 2023 09:18:38 GMT
Content-Length: 131
In a browser, open http://<minikube host IP>:7080/ (if running in a VM, remember to stop the VM's firewall or add a firewall rule).

29.2. Tekton
29.2.1. 安装最新版本Tekton Pipelines
kubectl apply -f https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml
kubectl get pods --namespace tekton-pipelines --watch
The installation has succeeded once both tekton-pipelines-controller and tekton-pipelines-webhook report 1/1 Running.
NAME                                           READY   STATUS              RESTARTS   AGE
tekton-pipelines-controller-6d989cc968-j57cs   0/1     Pending             0          3s
tekton-pipelines-webhook-69744499d9-t58s5      0/1     ContainerCreating   0          3s
tekton-pipelines-controller-6d989cc968-j57cs   0/1     ContainerCreating   0          3s
tekton-pipelines-controller-6d989cc968-j57cs   0/1     Running             0          5s
tekton-pipelines-webhook-69744499d9-t58s5      0/1     Running             0          6s
tekton-pipelines-controller-6d989cc968-j57cs   1/1     Running             0          10s
tekton-pipelines-webhook-69744499d9-t58s5      1/1     Running
Press Ctrl+C to stop watching.
29.2.2. task
A Task defines one or more Steps; each Task runs on the cluster as a pod, and each of its Steps runs in its own container.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: hello
spec:
  steps:
    - name: echo
      image: alpine
      script: |
        #!/bin/sh
        echo "Hello World"
kubectl apply --filename hello-world.yaml
task.tekton.dev/hello created
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: hello-task-run
spec:
  taskRef:
    name: hello
kubectl apply --filename hello-world-run.yaml
kubectl get taskrun hello-task-run
NAME             SUCCEEDED   REASON      STARTTIME   COMPLETIONTIME
hello-task-run   True        Succeeded   22h         22h
kubectl logs --selector=tekton.dev/taskRun=hello-task-run
Hello World
30. Docker
30.1. Installing Docker
-
OS: centos-stream-9
-
2 CPUs
-
4 GB RAM
1. Install yum-utils
sudo yum install -y yum-utils
2. Add the docker repository
If you can reach the official site directly, add the official repository:
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Otherwise use the Aliyun mirror below:
sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
3. Install the docker packages
sudo yum install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
#Start docker and enable it at boot
sudo systemctl enable docker
sudo systemctl start docker
4. Configure the image source
If you have a proxy server available, configuring a proxy is recommended:
cat << EOF >/etc/docker/daemon.json
{
"proxies": {
"http-proxy": "http://10.88.33.166:6666",
"https-proxy": "http://10.88.33.166:6668",
"no-proxy": "127.0.0.1,localhost"
}
}
EOF
If not, configure a registry mirror instead:
sudo tee /etc/docker/daemon.json <<EOF
{
"registry-mirrors": [
"https://docker.m.daocloud.io"
]
}
EOF
5. Restart docker
#Reload the configuration
sudo systemctl daemon-reload
#Restart docker
sudo systemctl restart docker
#Run the hello-world image to verify that docker is installed correctly
sudo docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub. (amd64)
 3. The Docker daemon created a new container from that image which runs the executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/
Docker is now installed correctly.
30.2. Common commands
#Build an image with the given name and tag
docker build -t testdemo:v1 .
#Create a container, mapping host port 8000 to container port 8080
docker run -d --privileged -p 8000:8080 --name testdemo testdemo:v1
#Enter a container
docker exec -it [containerID/containerNAME] /bin/bash
#View logs
docker logs [containerID/containerNAME]
#Stream logs to the console
docker logs -f [containerID/containerNAME]
#Show the last 3 lines of the log
docker logs --tail 3 [containerID/containerNAME]
#Export an image
docker save -o hello-world.tar hello-world:latest
#Import an image
docker load -i hello-world.tar
#docker load does not tag the imported image automatically; add a tag with docker tag
docker tag <imported image ID> hello-world:latest
30.3. docker_image_pusher
reference: https://zhuanlan.zhihu.com/p/704142383
30.3.1. Configure an Aliyun docker registry
Log in to the Aliyun Container Registry console: https://cr.console.aliyun.com/
1. Click "Personal Instance"

2. Click "Namespaces"

3. Create a namespace

4. Set credentials

Set a fixed password under access credentials. Note down that password, plus the username and registry address shown there; they are needed later. |
30.3.2. 使用docker_image_pusher
Use GitHub Actions to copy Docker images hosted abroad into a private Aliyun registry.
1. Fork the project

2. Configure the secrets
Click Settings → Secrets and variables → Actions → New repository secret, and add the Aliyun Container Registry values configured earlier:
-
ALIYUN_NAME_SPACE: namespace
-
ALIYUN_REGISTRY: registry address
-
ALIYUN_REGISTRY_PASSWORD: password
-
ALIYUN_REGISTRY_USER: username

3. Mirror the images
Edit the project's images.txt, add the images to copy, and commit the change.

After the commit, a GitHub Actions build starts automatically; a result like the screenshot below means the build succeeded.

4. View the images
After a successful copy, the images appear in the Aliyun Container Registry console at https://cr.console.aliyun.com/

Click a repository name to see usage instructions.

30.4. Proxy configuration
30.4.1. Method 1: set it in the configuration file
cat << EOF >/etc/docker/daemon.json
{
"proxies": {
"http-proxy": "http://10.88.33.14:6666",
"https-proxy": "http://10.88.33.14:6668",
"no-proxy": "127.0.0.0/8,192.168.49.2"
}
}
EOF
#Restart docker
systemctl restart docker
#Run the hello-world image to test whether the proxy works
docker run hello-world
10.88.33.14:6666 is your proxy server's IP and port. 192.168.49.2 is a LAN IP (created here by the author's minikube) that should bypass the proxy; add entries like it as your situation requires. |
30.4.2. Method 2: environment variables
export HTTP_PROXY=http://10.88.33.14:6666
export HTTPS_PROXY=http://10.88.33.14:6668
export NO_PROXY="192.168.49.2,localhost,127.0.0.1"
31. Nginx
31.1. proxy_pass
cat <<EOF >> /etc/hosts
127.0.0.1 test.testdemo.com
EOF
vim /etc/nginx/nginx.conf
#Add a server block for testing
server {
listen 80;
server_name test.testdemo.com;
# Forward requests to http://www.baidu.com
location / {
proxy_pass http://www.baidu.com;
}
}
nginx -t && nginx -s reload
Visit http://test.testdemo.com in a browser.
The page redirects to the Baidu home page.
31.2. try_files
cat <<EOF >> /etc/hosts
127.0.0.1 test.testdemo.com
EOF
Place an image 123.png in the /data/images/ directory
vim /etc/nginx/nginx.conf
#Add a server block for testing
server {
listen 80;
server_name test.testdemo.com;
location /images/ {
root /data;
try_files $uri $uri/ /images/index.html;
}
}
nginx -t && nginx -s reload
Because the URI 123.png exists, the resource is served normally; the full URL is http://test.testdemo.com/images/123.png |
A request for http://test.testdemo.com/images/123.pn is resolved in try_files order: 123.pn does not exist, http://test.testdemo.com/images/123.pn/ does not exist either, so /images/index.html is served in the end. |
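The try_files lookup order can be sketched in Python (purely illustrative — nginx does this internally; the helper name is ours):

```python
import os

def try_files(root: str, uri: str, fallback: str) -> str:
    """Mimic `try_files $uri $uri/ /images/index.html` resolution order."""
    path = root + uri
    if os.path.isfile(path):   # $uri  — serve the file
        return uri
    if os.path.isdir(path):    # $uri/ — serve the directory
        return uri + "/"
    return fallback            # last parameter — internal redirect

# Assuming /data/images/123.png exists but 123.pn does not:
# try_files("/data", "/images/123.png", "/images/index.html") -> "/images/123.png"
# try_files("/data", "/images/123.pn",  "/images/index.html") -> "/images/index.html"
```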
31.3. rewrite
cat <<EOF >> /etc/hosts
127.0.0.1 test.testdemo.com
EOF
vim /etc/nginx/nginx.conf
#Add a server block for testing
server {
location / {
rewrite ^/(.*) http://www.baidu.com;
}
}
nginx -t && nginx -s reload
-
Visit http://test.testdemo.com in a browser: the page redirects to the Baidu home page.
-
Visit http://test.testdemo.com/data/images/123.png: the page also redirects to the Baidu home page.
-
Any URI beginning with "/" is redirected.
Syntax: rewrite regex replacement [flag];
|
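The rewrite rule above matches the request URI with a regular expression. A Python sketch (illustrative only, not how nginx is implemented) of why `^/(.*)` redirects every request:

```python
import re

REWRITE = re.compile(r"^/(.*)")
TARGET = "http://www.baidu.com"

def rewrite(uri: str) -> str:
    """Any URI starting with "/" matches, so every request is redirected."""
    if REWRITE.match(uri):
        return TARGET
    return uri

print(rewrite("/"))                     # http://www.baidu.com
print(rewrite("/data/images/123.png"))  # http://www.baidu.com
```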
31.4. return
cat <<EOF >> /etc/hosts
127.0.0.1 test.testdemo.com
EOF
vim /etc/nginx/nginx.conf
#Add a server block for testing
server {
location / {
root /data/images/;
return 200;
}
}
nginx -t && nginx -s reload
The server responds with a 200 status code.
32. java
32.1. Installing Java
32.1.1. linux (centos-stream9)
#Install java8
yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel java-1.8.0-openjdk-headless
#Check the current java environment
java -version
#Official download URL (requires logging in on the web page and downloading manually)
https://download.oracle.com/otn/java/jdk/8u421-b09/d8aa705069af427f9b83e66b34f5e380/jdk-8u421-linux-x64.tar.gz
#Extract the archive
tar zxvf jdk-8u421-linux-x64.tar.gz
#Move the extracted directory to /usr/local/ and rename it to java
mv jdk1.8.0_421 /usr/local/java
#Edit the global profile
vi /etc/profile
export JAVA_HOME=/usr/local/java
export JRE_HOME=/usr/local/java/jre
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib
#Apply /etc/profile to the current shell
source /etc/profile
#Check the java version to verify the configuration
java -version
#Install java11
yum install -y java-11-openjdk java-11-openjdk-devel java-11-openjdk-headless
#Check the current java environment
java -version
#Install java17
yum install -y java-17-openjdk java-17-openjdk-devel java-17-openjdk-headless
#Check the current java environment
java -version
32.1.2. windows10
1. Download the JDK
Download from https://www.oracle.com/java/technologies/downloads/ and choose the .msi installer.
2. Run the installer
Simply run the installation program.
3. Configure environment variables
Under "System variables", set JAVA_HOME, Path, and CLASSPATH (case does not matter); click "Edit" if a variable already exists, otherwise "New".
This PC → right click → Properties → About → Advanced system settings → Advanced → Environment Variables
Variable: JAVA17      Value: C:\Program Files (x86)\Java\jdk17   (adjust to your actual path)
Variable: JAVA_HOME   Value: %JAVA17%
Variable: CLASSPATH   Value: .;%JAVA_HOME%\lib\dt.jar;%JAVA_HOME%\lib\tools.jar;   (note the leading ".")
Variable: Path        Value: %JAVA_HOME%\bin;%JAVA_HOME%\jre\bin;
4. Test the JDK installation
Open "Start" → "Run" and type "cmd"
Run java -version, java, and javac; if they print version information, the environment variables are configured correctly.
32.2. Switching Java versions
If multiple versions of Java are installed from the official CentOS repositories, use alternatives
to switch between them.
[root@dlp ~]# alternatives --config java
There are 3 programs which provide 'java'.
Selection Command
-----------------------------------------------
1 java-1.8.0-openjdk.x86_64 (/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.332.b09-1.el9.x86_64/jre/bin/java)
*+ 2 java-11-openjdk.x86_64 (/usr/lib/jvm/java-11-openjdk-11.0.15.0.10-1.el9.x86_64/bin/java)
3 java-17-openjdk.x86_64 (/usr/lib/jvm/java-17-openjdk-17.0.3.0.7-1.el9.x86_64/bin/java)
Enter to keep the current selection[+], or type selection number: 3
[root@dlp ~]# alternatives --config javac
There are 3 programs which provide 'javac'.
Selection Command
-----------------------------------------------
1 java-1.8.0-openjdk.x86_64 (/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.332.b09-1.el9.x86_64/bin/javac)
*+ 2 java-11-openjdk.x86_64 (/usr/lib/jvm/java-11-openjdk-11.0.15.0.10-1.el9.x86_64/bin/javac)
3 java-17-openjdk.x86_64 (/usr/lib/jvm/java-17-openjdk-17.0.3.0.7-1.el9.x86_64/bin/javac)
Enter to keep the current selection[+], or type selection number: 3
[root@dlp ~]# java --version
openjdk 17.0.3 2022-04-19 LTS
OpenJDK Runtime Environment 21.9 (build 17.0.3+7-LTS)
OpenJDK 64-Bit Server VM 21.9 (build 17.0.3+7-LTS, mixed mode, sharing)
[root@dlp ~]# javac --version
javac 17.0.3
32.3. Changing the Gradle distribution mirror
project directory → gradle → wrapper → gradle-wrapper.properties
Change distributionUrl
to:
distributionUrl=https\://mirrors.cloud.tencent.com/gradle/gradle-xx-xx
①
① Substitute the version that suits your project. |
Finally, open a terminal in the project directory and run
./gradlew bootRun
.\gradlew.bat bootRun
Downloading https://mirrors.aliyun.com/macports/distfiles/gradle/gradle-7.4.2-bin.zip
...........10%...........20%...........30%...........40%...........50%...........60%...........70%...........80%...........90%...........100%
Starting a Gradle Daemon, 1 busy and 1 stopped Daemons could not be reused, use --status for details
...........
.....
.....
33. kvm
33.1. kvm虚拟机使用usb
lsusb
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 003 Device 002: ID 30fa:0400 USB OPTICAL MOUSE Bus 003 Device 003: ID 30c9:0069 Luxvisions Innotech Limited HP Wide Vision HD Camera Bus 003 Device 004: ID 8087:0033 Intel Corp. AX211 Bluetooth Bus 004 Device 001: ID 03f0:1617 Linux Foundation 3.0 root hub
Add the following to the "devices" block of the VM's XML file:
<hostdev mode='subsystem' type='usb'>
  <source startupPolicy='optional'>
    <vendor id='0x03f0'/>
    <product id='0x1617'/>
  </source>
</hostdev>
Then:
virsh undefine <vm-name>
virsh define <vm>.xml
34. Running scripts at boot
34.1. Method 1: edit /etc/rc.d/rc.local
The /etc/rc.d/rc.local file is run after all other system services have started, so to run your own script at boot you can append its path to this file.
First, make sure the file has execute permission.
# Add execute permission
chmod +x /etc/rc.d/rc.local
# Test script auto_run_script.sh: write the date into a file under root's home directory
cat <<EOF > /root/auto_run_script.sh
#!/bin/bash
date >> /root/test.txt
EOF
# Make it executable:
chmod +x /root/auto_run_script.sh
# Append auto_run_script.sh to /etc/rc.d/rc.local
cat <<EOF >> /etc/rc.d/rc.local
/root/auto_run_script.sh
EOF
Now reboot to try it out:
$ sudo reboot
After the reboot, the script's output appears under root's home directory:
cat /root/test.txt
34.2. Method 2: crontab
crontab schedules tasks under Linux: when a set time arrives, it automatically triggers a script. Besides time-based schedules, it supports a special entry called @reboot which, as the name suggests, runs a script once after each system reboot.
crontab -e
# Add a line with the path of the script to run
@reboot /root/auto_run_script.sh
35. FTP
35.1. Setting up an FTP site
-
OS: centos-stream9
-
vsftpd: 3.0.3
35.1.1. Step 1: install vsftpd
# Install and start the vsftpd service
dnf install -y vsftpd
systemctl enable vsftpd.service
systemctl start vsftpd.service
Then verify that the service is listening:
# Install net-tools
yum install -y net-tools
# Check which port the FTP service is listening on
netstat -antup | grep ftp
[root@iZbp14h7n3cwipjln62**** ~]# netstat -antup | grep ftp
tcp6       0      0 :::21       :::*        LISTEN      5870/vsftpd
At this point vsftpd is running in its default local-user mode; further configuration is still needed before the FTP service is usable.
35.1.2. Step 2: configure vsftpd
For data security, this section covers the passive-mode configuration with local-user access to the FTP server.
# Create a Linux user for the FTP service. This example uses ftptest.
adduser ftptest
# Set the ftptest user's password.
passwd ftptest
# Create a directory for the FTP service. This example uses /var/ftp/test.
mkdir /var/ftp/test
# Create a test file.
touch /var/ftp/test/testfile.txt
# Change the owner of /var/ftp/test to ftptest.
chown -R ftptest:ftptest /var/ftp/test
# Edit the vsftpd.conf configuration file.
vim /etc/vsftpd/vsftpd.conf
Configure the FTP server for passive mode:
# Disable anonymous logins.
anonymous_enable=NO
# Allow local users to log in.
local_enable=YES
# Listen on IPv4 sockets.
listen=YES
#listen_ipv6=YES
# Directory local users land in after login.
local_root=/var/ftp/test
# Confine all users to their home directory.
chroot_local_user=YES
# Enable the exception user list.
chroot_list_enable=YES
# File listing the exception users, who are not confined to their home directory.
chroot_list_file=/etc/vsftpd/chroot_list
# Enable passive mode.
pasv_enable=YES
allow_writeable_chroot=YES
# In this guide, the public IP of the Linux instance.
pasv_address=<FTP server public IP>
# Lowest port usable for passive-mode data connections.
# A fairly high range, e.g. 50000-50010, helps improve the server's security.
pasv_min_port=50000
# Highest port usable for passive-mode data connections.
pasv_max_port=50010
Save and close the file, then create the exception list:
vim /etc/vsftpd/chroot_list
The chroot_list file must exist even when there are no exception users; it may be empty. Users listed in it are not confined to their home directory and can access other directories. |
systemctl restart vsftpd.service
35.1.3. Step 3: add firewall rules
firewall-cmd --zone=public --add-port=21/tcp --permanent
firewall-cmd --zone=public --add-port=50000-50010/tcp --permanent
firewall-cmd --reload
35.1.4. Step 4: test from a client
An FTP client, the Windows command line, or a browser can all be used to test the FTP server. Here a local Windows 10 machine serves as the FTP client. On the local machine, open File Explorer.
In the address bar, enter ftp://<FTP server public IP>:<FTP port>. In the login dialog that appears, enter the FTP username and password you configured, then click Log On.
After logging in you can see the files in the FTP server's configured directory, e.g. the test file testfile.txt.
36. gost
36.1. Overseas server
-
OS: centos-stream 9
-
Memory: 2G
-
cpu: 2
-
ip: 47.242.55.180
1. Install gost on the overseas server
cd ~
wget https://github.com/go-gost/gost/releases/download/v3.0.0-rc6/gost_3.0.0-rc6_linux_amd64v3.tar.gz -O gost_3.0.0-rc6_linux_amd64v3.tar.gz
tar xf gost_3.0.0-rc6_linux_amd64v3.tar.gz -C /usr/local/bin
ls -l /usr/local/bin/gost
gost -h
On older CPUs, use gost_3.0.0-rc6_linux_amd64.tar.gz (without the v3 suffix) instead. |
2. Set up the tunnel on the overseas server
cat << EOF > /etc/gost.yaml
services:
- name: gost-relay-service
addr: :35462
handler:
type: relay
auth:
username:
password: eethi6ohjuuQueen3omu
listener:
type: tls
log:
level: info
format: text
output: /var/log/gost/error.log
rotation:
maxSize: 100
maxAge: 4
maxBackups: 3
localTime: true
compress: true
EOF
cat << EOF > /usr/lib/systemd/system/gost.service
[Unit]
Description=GO Simple Tunnel
After=network.target remote-fs.target nss-lookup.target
[Service]
Type=simple
ExecStart=/usr/local/bin/gost -C /etc/gost.yaml
[Install]
WantedBy=multi-user.target
EOF
systemctl enable gost
systemctl start gost
systemctl status gost
On an Alibaba Cloud server you must also modify the security group, or port 35462 will not be reachable. Changes to security-group rules take effect without restarting gost. |
In this example, the relay service listens on TCP port 35462 over TLS. |
36.2. Local client
-
OS: centos-stream 9
-
Memory: 2G
-
cpu: 2
-
ip: 192.168.182.128
1. Install gost on the local client (a local VM)
Since the local VM's access to GitHub is slow, copy the archive over from the overseas server instead:
cd ~
scp root@47.242.55.180:~/gost_3.0.0-rc6_linux_amd64v3.tar.gz .
tar xf gost_3.0.0-rc6_linux_amd64v3.tar.gz -C /usr/local/bin
ls -l /usr/local/bin/gost
gost -h
2. Create a systemd service
# The IP 47.242.55.180 below is specific to this example; when configuring your own setup, use your own overseas server's IP
cat << EOF > /usr/lib/systemd/system/gost.service
[Unit]
Description=GO Simple Tunnel
After=network.target remote-fs.target nss-lookup.target
[Service]
Type=simple
ExecStart=/usr/local/bin/gost -L=socks5://:1080 -F=relay+tls://:eethi6ohjuuQueen3omu@47.242.55.180:35462
[Install]
WantedBy=multi-user.target
EOF
systemctl stop firewalld
systemctl disable firewalld
systemctl status firewalld
systemctl enable gost
systemctl start gost
systemctl status gost
- Tunnel protocol
-
Socks5
- Server IP
-
192.168.182.128
- Tunnel port
-
1080
Install the SwitchyOmega extension in Chrome, create a new profile, and configure the proxy settings as above.
37. network
37.1. ip addr
37.1.1. Add an IP address
# e.g. add the address 172.25.21.1/24 to eth0
ip addr add 172.25.21.1/24 dev eth0
37.1.2. Delete a specific IP address
ip addr del 172.25.21.1 dev eth0
37.1.3. Add an interface alias
# An alias effectively binds an additional IP to the interface
ip addr add 172.25.21.1/32 dev eth0 label eth0:1
37.1.4. Delete an alias
ip addr del 172.25.21.1/32 dev eth0
37.1.5. Change the IP address and gateway
# Edit the connection profile
vim /etc/NetworkManager/system-connections/ens160.nmconnection
# Restart NetworkManager for the change to take effect
systemctl restart NetworkManager
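A minimal sketch of what such a keyfile profile might contain after editing (the interface name, address, gateway, and DNS below are example values matching the static-IP section later in these notes; adjust for your host):

```ini
[connection]
id=ens160
type=ethernet
interface-name=ens160

[ipv4]
# address1 = IP/prefix,gateway
method=manual
address1=192.168.182.10/24,192.168.182.2
dns=61.139.2.69
```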
37.2. Dynamic IP
-
OS: centos-stream 9
# Enable DHCP
nmcli connection modify "ens160" ipv4.method auto
# Activate the connection profile
nmcli connection up "ens160"
37.3. Static IP
-
OS: centos-stream 9
nmcli connection show
# nmcli connection show
NAME    UUID                                  TYPE      DEVICE
ens160  a5eb6490-cc20-3668-81f8-0314a27f3f75  ethernet  enp1s0
ens160 is the name of the connection profile; it may differ between hosts.
nmcli connection modify "ens160" ipv4.method manual ipv4.addresses 192.168.182.10/24 ipv4.gateway 192.168.182.2 ipv4.dns 61.139.2.69
Activate the connection profile:
nmcli connection up "ens160"
37.4. iftop
37.4.1. Installing iftop
yum install -y epel-release
yum install -y iftop
37.4.2. Using iftop
iftop
The ⇐ and ⇒ arrows in the middle indicate the direction of traffic.
TX: transmitted traffic
RX: received traffic
TOTAL: total traffic
Cumm: cumulative traffic since iftop started
peak: peak traffic
rates: average traffic over the past 2s, 10s, and 40s |
- Common options
-
-
-i: select the interface to monitor, e.g. # iftop -i eth1
-
-n: show IP addresses instead of resolving hostnames, e.g. # iftop -n
-
-F: show traffic in and out of a specific network, e.g. # iftop -F 10.10.1.0/24 or # iftop -F 10.10.1.0/255.255.255.0
-
-P: show port information along with host information by default
-
- Interactive keys in iftop
-
-
S: toggle display of local port information
-
D: toggle display of remote host port information
-
t: cycle the display format: two lines / one line / sent traffic only / received traffic only
-
n: toggle between showing local IPs and hostnames
-
h: help
-
q: quit
-
37.5. netcat
37.5.1. Installing netcat
yum install -y epel-release
yum install -y netcat
37.5.2. Common commands
# Check whether port 80 on example.com is open.
nc -zv example.com 80
# Scan ports 1-100 on example.com.
nc -zv -w2 example.com 1-100
-
-z Zero-I/O mode; commonly used for port scanning.
-
-v Verbose mode; show detailed information
-
-w Connection timeout, also the read timeout for network operations (-w2 sets it to 2 seconds)
37.6. Port details
# (Windows) List all ports in use
netstat -ano
# (Windows) Find the PID occupying port 8081
netstat -aon|findstr "8081"
# (Windows) Show the process with the given PID
tasklist|findstr "9088"
# (Windows) Force-kill (/F) the process with PID 9088 and all its child processes (/T)
taskkill /T /F /PID 9088
# (Linux) Show processes using port 80
lsof -i :80
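On Linux, tools like netstat and ss ultimately read /proc/net/tcp. As a rough sketch (not a replacement for lsof/ss), the listening TCP ports can be pulled straight from that file; state 0A means LISTEN, and the port is the hex value after ':' in the local_address column:

```shell
# Extract the hex port of every socket in LISTEN state, then convert to decimal
awk 'NR > 1 && $4 == "0A" { split($2, a, ":"); print a[2] }' /proc/net/tcp |
while read -r hexport; do
    printf 'listening on port %d\n' "0x$hexport"
done
```

This only covers IPv4 TCP; /proc/net/tcp6 holds the IPv6 equivalents.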
38. chrony
38.1. Installation and setup
# Install chrony
yum install -y chrony
# Start the chronyd service
systemctl start chronyd
# Enable it at boot
systemctl enable chronyd
# Check the service status
systemctl status chronyd
38.2. Usage
38.2.1. Disable the firewalld firewall (TODO)
38.2.2. Set the time zone
timedatectl
timedatectl list-timezones | grep -E "Asia/S.*"
timedatectl set-timezone "Asia/Shanghai"
chronyc -a makestep
chronyc sources -v
38.2.3. Configure chrony
vim /etc/chrony.conf
Alibaba Cloud NTP server documentation: https://help.aliyun.com/document_detail/92704.html
# Comment out the four default NTP servers; syncing from them is slow here.
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
# Format: server <server address> iburst
# Add Alibaba Cloud's NTP servers. Listing several provides fallbacks if the
# first one is down; iburst speeds up the initial synchronization.
server ntp1.aliyun.com iburst
server ntp2.aliyun.com iburst
server ntp3.aliyun.com iburst
# Restart the chronyd service
systemctl restart chronyd
# Check the service status
systemctl status chronyd
To change the system time manually, first disable NTP time synchronization, then set the time, then re-enable NTP synchronization:
timedatectl set-ntp false
timedatectl set-time "2021-08-15 15:30:20"
# You can also change just part of it, e.g. the date
timedatectl set-time "2021-08-15"
# or the time of day
timedatectl set-time "15:30:20"
timedatectl set-ntp true
39. TOTP
Based on CentOS 7
The following is a working setup for two-factor login on Linux, using Google Authenticator for time-based one-time password (TOTP) authentication:
yum install -y epel-release
39.1. Install Google Authenticator:
sudo yum install google-authenticator
39.2. Generate a key and configure Google Authenticator:
Run the following command to generate a key and follow the prompts:
google-authenticator
This command generates a secret key, a QR code, and backup codes to add to the Google Authenticator app.
39.3. Install the PAM module:
sudo yum install pam-devel
39.4. Back up important files:
Before making any changes, be sure to back up /etc/pam.d/sshd and /etc/ssh/sshd_config in case a misconfiguration locks you out of the machine.
cp /etc/pam.d/sshd /etc/pam.d/sshd.bak
cp /etc/ssh/sshd_config /etc/ssh/sshd_config.bak
39.5. Configure PAM:
Edit the /etc/pam.d/sshd file (used for SSH logins) and add the following line at the end:
auth required pam_google_authenticator.so
Alternatively, to require two-factor authentication for every login method, edit /etc/pam.d/login instead.
39.6. Restart the SSH service:
sudo systemctl restart sshd
39.7. Enable the SSH settings:
Edit the SSH configuration file /etc/ssh/sshd_config and make sure the following options enable the extra authentication factor:
ChallengeResponseAuthentication yes
UsePAM yes
39.8. Test the configuration:
After restarting the SSH service, try logging in to your system over SSH. You should first enter your username and password, and then be prompted for the verification code generated by the Google Authenticator app (installable from your phone's app store).
These steps enable two-factor authentication on your Linux system: logins require a username, a password, and a Google Authenticator code. Make sure you fully understand the configuration and its security implications, and test it before relying on it in production.
40. User and group management
40.1. Common commands
To add a user to a group, never use a bare usermod -G groupA: it removes the user from all other supplementary groups, leaving them a member of groupA only.
cat /etc/passwd
# or
getent passwd
cat /etc/shadow
# Add a user to the specified group
usermod -aG groupA user1
-
-a: append, i.e. add the user to groupA without leaving other groups.
-
-G: takes a group name; the user is added to this group
cat /etc/group
# or
getent group
# Remove user1 from groupA
gpasswd -d user1 groupA
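A few read-only commands for inspecting users and group membership without changing anything (safe to run as any user):

```shell
# Groups the current user belongs to
id -nG
# Full entry for one group: name:password:GID:member-list
getent group root
# Regular (non-system) accounts; UID >= 1000 is the usual convention on
# modern distributions, though the threshold is set in /etc/login.defs
awk -F: '$3 >= 1000 { print $1 }' /etc/passwd
```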
41. rsync
41.1. Installation
rsync must be installed on both machines involved in the transfer.
yum install -y rsync
41.2. Examples
rsync -av /source/directory /destination/directory
Sync a local directory into another local directory; -a (archive) preserves permissions, timestamps, and symlinks, -v is verbose. |
rsync -avz /source/directory user@remotehost:/destination/directory
-z compresses data during transfer, useful when syncing to a remote host. |
rsync -av --exclude='filepattern' /source/directory /destination/directory
The --exclude option skips files or directories matching filepattern; use --exclude multiple times to exclude several patterns. |
rsync -av --delete /source/directory /destination/directory
The --delete option removes files from the destination directory that no longer exist in the source directory. |
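A self-contained dry run of the flags above, using throwaway directories (the paths are arbitrary):

```shell
# Build a small source tree to sync
src=$(mktemp -d) && dst=$(mktemp -d)
echo "keep me" > "$src/a.txt"
echo "skip me" > "$src/a.log"
# -n (--dry-run) prints what would be transferred without copying anything
rsync -avn --exclude='*.log' "$src/" "$dst/"
# The real sync; a trailing slash on src copies its contents, not the dir itself
rsync -av --exclude='*.log' "$src/" "$dst/"
ls "$dst"    # a.txt only; a.log was excluded
```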
42. Maven
42.1. Installation and configuration
42.1.1. Installing Maven
-
Maven 3.9.6
Installing Apache Maven is a simple process: extract the archive and add the bin directory containing the mvn command to PATH.
- System Requirements
-
-
JDK 8 or later
-
wget https://dlcdn.apache.org/maven/maven-3/3.9.6/binaries/apache-maven-3.9.6-bin.tar.gz -O /root/test/apache-maven-3.9.6-bin.tar.gz
Or download it with a browser from: https://maven.apache.org/download.cgi
tar xzvf /root/test/apache-maven-3.9.6-bin.tar.gz -C /usr/local/
cat << EOF >> /root/.bashrc
export PATH="/usr/local/apache-maven-3.9.6/bin:\$PATH"
EOF
Reload the shell configuration:
source /root/.bashrc
Check that Maven works:
mvn -v
The output should look similar to:
Apache Maven 3.9.6 (bc0240f3c744dd6b6ec2920b3cd08dcc295161ae)
Maven home: /usr/local/apache-maven-3.9.6
Java version: 17.0.6, vendor: Red Hat, Inc., runtime: /usr/lib/jvm/java-17-openjdk-17.0.6.0.10-3.el9.x86_64
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "5.14.0-391.el9.x86_64", arch: "amd64", family: "unix"
42.1.2. Global mirror
cat << EOF > ~/.m2/settings.xml
<settings>
<mirrors>
<mirror>
<id>aliyun</id>
<name>Aliyun Central</name>
<url>http://maven.aliyun.com/nexus/content/groups/public/</url>
<mirrorOf>central</mirrorOf>
</mirror>
</mirrors>
</settings>
EOF
42.1.3. Per-project mirror
Method 1:
Project directory/.mvn/wrapper/maven-wrapper.properties
Change distributionUrl and wrapperUrl:
distributionUrl=https://maven.aliyun.com/repository/central/org/apache/maven/apache-maven/3.6.3/apache-maven-3.6.3-bin.zip
wrapperUrl=https://maven.aliyun.com/repository/central/io/takari/maven-wrapper/0.5.6/maven-wrapper-0.5.6.jar
# distributionUrl=https://repo.maven.apache.org/maven2/org/apache/maven/apache-maven/3.6.3/apache-maven-3.6.3-bin.zip
# wrapperUrl=https://repo.maven.apache.org/maven2/io/takari/maven-wrapper/0.5.6/maven-wrapper-0.5.6.jar
Method 2:
The settings.xml path can be located with Everything; a typical path: D:\Installations\IDEA\IntelliJ IDEA 2023.3.2\plugins\maven\lib\maven3\conf\settings.xml |
42.2. Usage
42.2.1. Build Lifecycle
- mvn clean
-
Remove build artifacts
- mvn compile
-
Compile the source code of the project
- mvn validate
-
Validate the project is correct and all necessary information is available
- mvn test
-
Run unit tests: test the compiled source code using a suitable unit testing framework. These tests should not require the code be packaged or deployed
- mvn verify
-
Verify the build: run any checks on results of integration tests to ensure quality criteria are met
- mvn install
-
Install the package into the local repository, for use as a dependency in other projects locally
- mvn deploy
-
Deploy: done in the build environment, copies the final package to the remote repository for sharing with other developers and projects.
- mvn package
-
Package: take the compiled code and package it in its distributable format, such as a JAR.
- mvn compile war:war
-
Build a WAR file.
- mvn war:exploded
-
Create an exploded webapp in a specified directory.
- Start a Spring Boot application
-
mvn spring-boot:run
- Run a jar with Java
-
java -jar target/accessing-data-jpa-0.0.1-SNAPSHOT.jar
43. grep, sed, awk
43.1. grep
grep stands for Global Regular Expression Print.
Exit status: 0 if a match is found, 1 if no match is found, 2 if a searched file does not exist.
egrep = grep -E
Syntax
grep [OPTION...] PATTERNS [FILE...]
43.1.1. Common options:
-
-i: case-insensitive matching.
-
-v: invert the match; print only non-matching lines.
-
-n: show the line numbers of matching lines.
-
-r: search files in subdirectories recursively.
-
-l: print only the names of matching files.
-
-c: print only the count of matching lines.
-
-e: combine multiple patterns with a logical OR
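A quick demonstration of several of these options on a throwaway file:

```shell
cat > /tmp/grep_demo.txt << 'EOF'
Alpha line
beta line
ALPHA again
gamma
EOF
grep -in 'alpha' /tmp/grep_demo.txt              # 1:Alpha line / 3:ALPHA again
grep -c  'line'  /tmp/grep_demo.txt              # 2
grep -v  'line'  /tmp/grep_demo.txt              # ALPHA again / gamma
grep -e 'beta' -e 'gamma' /tmp/grep_demo.txt     # beta line / gamma
```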
43.2. sed
sed is mainly used to edit one or more files automatically.
Syntax:
sed [options] 'command' file(s)
sed [options] -f scriptfile file(s)
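A few common sed invocations, sketched on sample input:

```shell
# s///: substitute the first match on each line
echo 'hello world' | sed 's/world/sed/'          # hello sed
# -n with p: print only the selected lines
printf 'one\ntwo\nthree\n' | sed -n '2p'         # two
# d: delete matching lines
printf 'one\ntwo\nthree\n' | sed '/two/d'        # one / three
# -i: edit a file in place
f=/tmp/sed_demo.txt
printf 'one\ntwo\nthree\n' > "$f"
sed -i 's/two/2/' "$f"
cat "$f"                                         # one / 2 / three
```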
44. MySQL
44.1. Common commands
You can log in to your own MySQL account and create databases yourself, or have an administrator create the database and grant privileges so you can use it directly.
44.1.1. Create and use a database
SHOW DATABASES;
CREATE DATABASE menagerie;
USE menagerie;
44.1.2. Create a table
SHOW TABLES;
CREATE TABLE pet (
name VARCHAR(20),
owner VARCHAR(20),
species VARCHAR(20),
sex CHAR(1),
birth DATE,
death DATE);
DESCRIBE pet;
44.1.3. Load data into the table
After creating the table, it needs to be populated; the LOAD DATA and INSERT statements do this. Suppose your pet records look like this:
name     | owner  | species | sex | birth      | death
Fluffy   | Harold | cat     | f   | 1993-02-04 |
Claws    | Gwen   | cat     | m   | 1994-03-17 |
Buffy    | Harold | dog     | f   | 1989-05-13 |
Fang     | Benny  | dog     | m   | 1990-08-27 |
Bowser   | Diane  | dog     | m   | 1979-08-31 | 1995-07-29
Chirpy   | Gwen   | bird    | f   | 1998-09-11 |
Whistler | Gwen   | bird    |     | 1997-12-09 |
Slim     | Benny  | snake   | m   | 1996-04-29 |
Since you are starting with an empty table, an easy way to populate it is to create a text file containing one row per animal, then load the file's contents into the table with a single statement.
You could create a text file "pet.txt", one record per line, with values separated by tabs and given in the order the columns were listed in the CREATE TABLE statement. For missing values (such as an unknown sex, or the death date of an animal still alive), use NULL values, represented in the text file as \N (backslash, capital N). For example, the record for the bird Whistler would look like this (the whitespace between values is a single tab):
name     | owner | species | sex | birth      | death
Whistler | Gwen  | bird    | \N  | 1997-12-09 | \N
LOAD DATA LOCAL INFILE '/root/pet.txt' INTO TABLE pet;
If you run into the error `Loading local data is disabled; this must be enabled on both the client and server side`:
log out of the current session,
log in to the MySQL server (usually as the root superuser),
set the local_infile parameter to 1,
log out again,
log back in as the target user with --local-infile=1,
and then rerun the statement. |
Note that if the file was created with a Windows editor (which uses \r\n as the line terminator), you should use:
LOAD DATA LOCAL INFILE '/root/pet.txt' INTO TABLE pet LINES TERMINATED BY '\r\n'; |
To add a single new record, use the INSERT statement. In its simplest form you supply a value for every column, in the order the columns were listed in the CREATE TABLE statement. Suppose Diane gets a new hamster named Puffball; you can add the record with:
mysql> INSERT INTO pet
VALUES ('Puffball','Diane','hamster','f','1999-03-30',NULL);
44.2. Retrieving information from a table
Select all data:
mysql> SELECT * FROM pet;
Select particular rows:
mysql> SELECT * FROM pet WHERE name = 'Bowser';
MySQL Chinese-language reference: https://www.mysqlzh.com/doc/27/45.html
45. epel
45.1. Changing the EPEL mirror
45.1.1. Back up the old repos
mv /etc/yum.repos.d/epel.repo /etc/yum.repos.d/epel.repo.backup
mv /etc/yum.repos.d/epel-testing.repo /etc/yum.repos.d/epel-testing.repo.backup
45.1.2. Install the new repo
# Install the epel-release configuration package
yum install -y https://mirrors.aliyun.com/epel/epel-release-latest-8.noarch.rpm
# Replace the addresses in the repo config with the Aliyun mirror
sed -i 's|^#baseurl=https://download.example/pub|baseurl=https://mirrors.aliyun.com|' /etc/yum.repos.d/epel*
sed -i 's|^metalink|#metalink|' /etc/yum.repos.d/epel*
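What the two substitutions above actually do, shown on sample lines of the kind found in a stock epel.repo (the sample lines here are illustrative):

```shell
# Uncomment baseurl and point it at the Aliyun mirror
echo '#baseurl=https://download.example/pub/epel/9/Everything/$basearch/' |
    sed 's|^#baseurl=https://download.example/pub|baseurl=https://mirrors.aliyun.com|'
# -> baseurl=https://mirrors.aliyun.com/epel/9/Everything/$basearch/

# Comment out the metalink line so the mirror list is not consulted
echo 'metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-9&arch=$basearch' |
    sed 's|^metalink|#metalink|'
# -> #metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-9&arch=$basearch
```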
# Alternative for CentOS 7: download the Aliyun epel-7 repo file directly
wget -O /etc/yum.repos.d/epel.repo https://mirrors.aliyun.com/repo/epel-7.repo
46. rpm
46.1. rpm
46.1.1. Common rpm commands
rpm -ivh xxxx.rpm
# or
rpm -Uvh xxxx.rpm
-
-i install
-
-U upgrade
rpm -e <packagename>
# or
rpm -e --nodeps <packagename>
-
-e erase (uninstall) package
-
--nodeps: skip the dependency check; forces removal of the package even if other packages depend on it.
# List all installed packages
rpm -qa
# Find which package owns a file
rpm -qf /usr/sbin/sshd
# List the files installed by a package
rpm -ql openssh-server
# List a package's configuration files
rpm -qc openssh-server
# Show the state of a package's files
rpm -qs openssh-server
47. Viewing basic machine information (TODO)
47.1. Hardware
lscpu
# or
cat /proc/cpuinfo
Note the model, core count, thread count, frequency, etc. |
free -h
# or
cat /proc/meminfo
Note total memory, available memory, etc. |
lsblk, df -h, fdisk -l
# or
parted -l
Note disk model, capacity, partition layout, mount points, etc. |
ip a
ifconfig
lspci | grep -i net
# or
ethtool
Note NIC model, IP address, MAC address, network configuration, etc. |
dmidecode | grep -A 9 "System Information"
# or
dmidecode | grep -A 26 "BIOS Information"
Note motherboard model, BIOS/UEFI version, etc. |
47.2. Software
uname -a
lsb_release -a
cat /etc/os-release
Distribution name, version number, kernel version, etc. |
rpm -qa
Note the versions of key software and services. |
systemctl list-units --type=service
# or
ps aux
Note currently running services and their startup states. |
47.3. Network configuration (TODO)
47.4. System logs and monitoring
Users and permissions
Security configuration
Backup and recovery
Automation and scripting
Performance and load
Network performance
System configuration and parameters
Security and access control
Containers and virtualization
Environment configuration and dependencies
What log rotation and log management are
48. Disk management
49. OpenSSH upgrade
49.1. OpenSSH
In security compliance assessments, older OpenSSH versions carry many known vulnerabilities; upgrading the OpenSSH service resolves them. As of 2025-08-21 the latest stable major release is openssh-10.0, and this document installs openssh-10.0p2.
49.1.1. Step 1: back up the current SSH configuration
[[ -f /etc/ssh/sshd_config ]] && mv /etc/ssh/sshd_config /etc/ssh/sshd_config.$(date +%Y%m%d)
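The same timestamped-backup pattern, demonstrated on a scratch file (the demo path is arbitrary; note that cp, unlike the mv above, keeps the live file in place):

```shell
cfg=/tmp/demo_sshd_config
echo 'Port 22' > "$cfg"
# Appending today's date means repeated backups get distinct names per day
[[ -f $cfg ]] && cp "$cfg" "$cfg.$(date +%Y%m%d)"
ls "$cfg"*
```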
49.1.2. Step 2: install the rpm packages
Upload the zip archive (download address: .) and install:
yum --disablerepo=* localinstall -y openssh-server-10.0p2-1.el7.x86_64.rpm
yum --disablerepo=* localinstall -y openssh-clients-10.0p2-1.el7.x86_64.rpm
yum --disablerepo=* localinstall -y openssh-10.0p2-1.el7.x86_64.rpm
yum --disablerepo=* localinstall -y openssh-askpass-10.0p1-1.el7.x86_64.rpm
-
--disablerepo temporarily disables all repositories, so the install skips network checks and completes faster
-
localinstall performs the installation from local rpm files
49.1.3. Configuration
Tighten the host key file permissions, in case overly open permissions keep the service from starting:
chmod -v 600 /etc/ssh/ssh_host_*_key
For CentOS 7+: in some cases the previously installed systemd service file is left on disk after the upgrade:
if [[ -d /run/systemd/system && -f /usr/lib/systemd/system/sshd.service ]]; then
mv /usr/lib/systemd/system/sshd.service /usr/lib/systemd/system/sshd.service.$(date +%Y%m%d)
systemctl daemon-reload
fi
ssh -V && /usr/sbin/sshd -V
service sshd restart
50. APT mirrors
50.1. Changing the APT mirror
50.1.1. Ubuntu 22.04+
Applies to releases from Ubuntu 22.04 onward. Starting with Ubuntu 22.04 (jammy), the official images gradually switched from the traditional /etc/apt/sources.list to the new *.sources deb822 format; the real configuration lives in /etc/apt/sources.list.d/ubuntu.sources.
sudo cp /etc/apt/sources.list.d/ubuntu.sources /etc/apt/sources.list.d/ubuntu.sources.bak
sudo tee /etc/apt/sources.list.d/ubuntu.sources > /dev/null << EOF
Types: deb
URIs: http://mirrors.aliyun.com/ubuntu/
Suites: noble noble-updates noble-backports noble-security
Components: main restricted universe multiverse
Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
EOF
sudo apt update
Hit:1 http://mirrors.aliyun.com/ubuntu noble InRelease
Hit:2 http://mirrors.aliyun.com/ubuntu noble-updates InRelease
Hit:3 http://mirrors.aliyun.com/ubuntu noble-backports InRelease
Hit:4 http://mirrors.aliyun.com/ubuntu noble-security InRelease
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
....
51. WSL
51.1. WSL proxy configuration
Inside WSL, the Windows host is not at 127.0.0.1; it is reached via localhost forwarding or a fixed gateway address such as 172.17.0.1 / 172.18.0.1 / 172.31.0.1. The most reliable approach is to use Windows' internal gateway:
export WIN_IP=$(cat /etc/resolv.conf | grep nameserver | awk '{print $2}')
echo $WIN_IP
This WIN_IP is the address WSL uses to reach Windows. Now set the proxy inside WSL (effective only in the current terminal). Assuming the proxy on Windows listens on 127.0.0.1:6666 (HTTP) and 127.0.0.1:6668 (HTTPS), you can write:
export http_proxy="http://$WIN_IP:6666"
export https_proxy="http://$WIN_IP:6668"
For a SOCKS5 proxy on port 6667:
export all_proxy="socks5://$WIN_IP:6667"
Test it:
curl -I https://www.google.com
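Since the exports above only last for the current shell, a small pair of helper functions can be dropped into ~/.bashrc (the function names are illustrative; the ports match the example ports used here):

```shell
proxy_on() {
    # Windows host IP as seen from WSL, taken from the DNS entry
    local win_ip
    win_ip=$(awk '/^nameserver/ { print $2; exit }' /etc/resolv.conf)
    export http_proxy="http://$win_ip:6666"
    export https_proxy="http://$win_ip:6668"
    export all_proxy="socks5://$win_ip:6667"
    echo "proxy set to $win_ip"
}

proxy_off() {
    unset http_proxy https_proxy all_proxy
    echo "proxy cleared"
}
```

Run proxy_on before commands that need the tunnel and proxy_off afterwards.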
52. Miscellaneous
52.1. Chrome
52.1.1. Installing the Proxy SwitchyOmega extension
Installing Proxy SwitchyOmega requires access to the Google web store first, so launch Chrome behind a global proxy:
- Linux
-
google-chrome-stable --proxy-server="socks5://192.168.1.5:1080"
- Windows
-
In the Google Chrome shortcut's properties, append the run argument to the target, giving "C:\xxxx\chrome.exe" --proxy-server="socks5://192.168.1.5:1080"
Then install the Proxy SwitchyOmega extension.
52.1.2. Automatic proxy switching with SwitchyOmega
52.2. Telling virtual machines from physical machines
-
Reference:
52.2.1. Linux
root@router:~# dmidecode -s system-product-name
VMware Virtual Platform
root@router:~# dmidecode -s system-product-name
VirtualBox
root@router:~# dmidecode -s system-product-name
KVM
root@router:~# dmidecode -s system-product-name
Bochs
root@router:~# dmidecode | egrep -i 'manufacturer|product'
Manufacturer: Microsoft Corporation
Product Name: Virtual Machine
root@router:~# dmidecode
/dev/mem: Permission denied
root@router:~# dmidecode | grep -i domU
Product Name: HVM domU
On bare metal, this returns the model identifier of the machine or motherboard.
If you do not have permission to run dmidecode, you can use:
ls -1 /dev/disk/by-id/
[root@host-7-129 ~]# ls -1 /dev/disk/by-id/
ata-QEMU_DVD-ROM_QM00003
ata-QEMU_HARDDISK_QM00001
ata-QEMU_HARDDISK_QM00001-part1
ata-QEMU_HARDDISK_QM00002
ata-QEMU_HARDDISK_QM00002-part1
scsi-SATA_QEMU_HARDDISK_QM00001
scsi-SATA_QEMU_HARDDISK_QM00001-part1
scsi-SATA_QEMU_HARDDISK_QM00002
scsi-SATA_QEMU_HARDDISK_QM00002-part1
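The checks above can be folded into a small helper that works without root by reading the same DMI product name from /sys (dmidecode needs /dev/mem access), falling back to systemd-detect-virt when available; treat this as a sketch:

```shell
detect_virt() {
    # DMI product name, readable without root on most machines
    if [ -r /sys/class/dmi/id/product_name ]; then
        cat /sys/class/dmi/id/product_name
    elif command -v systemd-detect-virt > /dev/null 2>&1; then
        systemd-detect-virt
    else
        echo 'unknown'
    fi
}

detect_virt   # e.g. "VMware Virtual Platform", "KVM", or a bare-metal model
```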
52.2.2. Windows
Systeminfo | findstr /i "System Model"
If the "System Model:" value contains "Virtual", it is a virtual machine; otherwise it is a physical machine.