Loong
182 posts written
0 comments received
Search results: 182 posts found.
2019-05-11
Setting up Nginx load balancing on CentOS
Part 1: Build and install Nginx

Install the build dependencies:

yum -y install make gcc gcc-c++ gcc-g77 flex bison file libtool libtool-libs autoconf kernel-devel libjpeg libjpeg-devel libpng libpng-devel libpng10 libpng10-devel gd gd-devel freetype freetype-devel libxml2 libxml2-devel zlib zlib-devel glib2 glib2-devel bzip2 bzip2-devel libevent libevent-devel ncurses ncurses-devel curl curl-devel e2fsprogs e2fsprogs-devel krb5 krb5-devel libidn libidn-devel openssl openssl-devel gettext gettext-devel ncurses-devel gmp-devel pspell-devel unzip libcap lsof

Build PCRE from source:

tar zxf pcre-8.31.tar.gz
cd pcre-8.31
./configure
make && make install

Create the nginx user, then build and install nginx:

useradd -s /sbin/nologin -g nginx -M nginx
tar zxf nginx-1.10.2.tar.gz
cd nginx-1.10.2
./configure --prefix=/usr/local/nginx --sbin-path=/usr/local/nginx/bin/nginx --conf-path=/usr/local/nginx/conf/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx/nginx.pid --lock-path=/var/lock/nginx.lock --user=nginx --group=nginx --with-http_ssl_module --with-http_flv_module --with-http_stub_status_module --with-http_gzip_static_module --http-client-body-temp-path=/var/tmp/nginx/client/ --http-proxy-temp-path=/var/tmp/nginx/proxy/ --http-fastcgi-temp-path=/var/tmp/nginx/fcgi/ --http-uwsgi-temp-path=/var/tmp/nginx/uwsgi --http-scgi-temp-path=/var/tmp/nginx/scgi --with-pcre
make
make install

Test the binary:

/usr/local/nginx/bin/nginx -t
./nginx: error while loading shared libraries: libpcre.so.1: cannot open shared object file: No such file or directory

Fixing the PCRE problem:

# find / -name libpcre.so*
/usr/local/lib/libpcre.so.1.0.1
/usr/local/lib/libpcre.so
/usr/local/lib/libpcre.so.1
/lib64/libpcre.so.0.0.1
/lib64/libpcre.so.0

Quite a few results. The PCRE library we just built landed in /usr/local/lib, so tell the dynamic linker about that directory:

vim /etc/ld.so.conf      # append /usr/local/lib on the last line
[root@mail2 bin]# ldconfig
# /usr/local/nginx/bin/nginx -t
nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful

That fixed it. Start nginx:

/usr/local/nginx/bin/nginx

Edit the main configuration:

vim /usr/local/nginx/conf/nginx.conf

Add one line just before the final closing brace:

include /usr/local/nginx/conf.d/*.conf;

Create that directory and a virtual-host file:

mkdir /usr/local/nginx/conf.d
vim /usr/local/nginx/conf.d/lkq.conf

upstream backend {
    server 192.168.236.150:80 weight=1;
    server 192.168.236.151:80 weight=2;
    #ip_hash;
}
server {
    listen 80;
    server_name www.lkq.com;
    location ~ ^/* {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_buffering off;
        proxy_pass http://backend;
    }
}

Part 2: High availability with HAProxy + Keepalived

Keepalived monitors the state of the web servers. If one dies or malfunctions, Keepalived detects it and removes it from the pool; once the server is healthy again, Keepalived adds it back automatically. All of this happens without manual intervention; the only manual work is repairing the failed web server.

HAProxy provides high availability, load balancing, and proxying for TCP and HTTP applications, with virtual-host support. It is a free, fast, and reliable solution, particularly well suited to heavily loaded web sites that need session persistence or layer-7 processing. On current hardware it can handle tens of thousands of concurrent connections, and its operating model makes it easy and safe to integrate into an existing architecture while keeping your web servers off the public network.

System environment: CentOS 6.5 x86_64, Desktop install, with SELinux and iptables disabled.

Figure 1 shows the basic architecture; Figure 2 shows the IP assignment, reproduced as the table below:

Role                        IP
Haproxy+keepalived_master   192.168.236.143
Haproxy+keepalived_backup   192.168.236.192
Webserver1                  192.168.236.150
Webserver2                  192.168.236.151

Step 1: install Keepalived on both HA machines.

# ln -s /usr/src/kernels/2.6.18-128.el5-i686/ /usr/src/linux

Download keepalived from http://www.keepalived.org/software/ and pick a version; version 1.2.23 is used here.

[root@mail2 keepalived-1.2.23]# ./configure --sysconf=/etc
[root@mail2 keepalived-1.2.23]# make && make install
[root@mail2 keepalived-1.2.23]# ln -s /usr/local/sbin/keepalived /sbin
[root@mail2 keepalived-1.2.23]# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
[root@mail2 keepalived-1.2.23]# ln -s /etc/init.d/keepalived /etc/rc.d/rc3.d/S99keepalived
[root@mail2 keepalived-1.2.23]# ln -s /etc/init.d/keepalived /etc/rc.d/rc5.d/S99keepalived

Step 2: edit the configuration files.

Master ([root@Lserver-1 keepalived]# cat keepalived.conf):

! Configuration File for keepalived
vrrp_script chk_http_port {
    script "/etc/keepalived/check_haproxy.sh"   # health-check script (see step 3)
    interval 2
    weight 2
}
global_defs {
    router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state MASTER            # this node is the keepalived master
    interface eth0
    virtual_router_id 51
    priority 150            # the backup's priority must be lower than the master's
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_http_port
    }
    virtual_ipaddress {
        192.168.236.230     # the VIP
    }
}

Backup:

! Configuration File for keepalived
vrrp_script chk_http_port {
    script "/etc/keepalived/check_haproxy.sh"   # same health-check script as the master
    interval 2
    weight 2
}
global_defs {
    router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state BACKUP            # this node is the keepalived backup
    interface eth0
    virtual_router_id 51
    priority 120            # lower than the master's priority
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_http_port
    }
    virtual_ipaddress {
        192.168.236.230     # same VIP as the master
    }
}

Step 3: the check scripts.

On the master, a script that keeps haproxy running:

# vi /etc/keepalived/check_haproxy.sh
#!/bin/bash
A=`ps -C haproxy --no-header | wc -l`
if [ $A -eq 0 ];then
    /usr/local/haproxy/sbin/haproxy -f /usr/local/haproxy/conf/haproxy.cfg
    echo "haproxy start"
    sleep 3
    if [ `ps -C haproxy --no-header | wc -l` -eq 0 ];then
        /etc/init.d/keepalived stop
        echo "keepalived stop"
    fi
fi

On the backup:

#!/bin/bash
A=`ip a | grep 192.168.236.230 | wc -l`
B=`ps -ef | grep haproxy | grep -v grep | awk '{print $2}'`
if [ $A -gt 0 ];then
    /usr/local/haproxy/sbin/haproxy -f /usr/local/haproxy/conf/haproxy.cfg
else
    kill -9 $B
fi

On both machines:

chmod 755 /etc/keepalived/check_haproxy.sh

Step 4: install HAProxy (identical on master and backup):

yum -y install pcre pcre-devel
wget https://fossies.org/linux/misc/haproxy-1.7.5.tar.gz
tar xf haproxy-1.7.5.tar.gz
cd haproxy-1.7.5
make TARGET=linux26 ARCH=x86_64 PREFIX=/usr/local/haproxy USE_PCRE=1
make install PREFIX=/usr/local/haproxy
cd /usr/local/haproxy/
mkdir conf
mkdir logs
vi conf/haproxy.cfg

haproxy.cfg:

global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice
    #log loghost local0 info
    maxconn 4096
    # chroot /usr/share/haproxy
    chroot /usr/local/haproxy
    uid 99
    gid 99
    daemon
    #debug
    #quiet

defaults
    log global
    mode http
    option httplog
    option dontlognull
    retries 3
    #redispatch
    maxconn 2000
    option redispatch
    stats uri /haproxy
    stats auth admin:admin

frontend www
    bind *:80
    acl web hdr(host) -i www.lkq.com   # match requests for this host name
    use_backend webserver if web

backend webserver
    mode http
    balance roundrobin
    option httpchk /index.html
    server s1 192.168.236.151:80 weight 3 check   # the two back-end web servers
    server s2 192.168.236.150:80 weight 3 check

Step 5: start keepalived on both machines, master first (if haproxy is not yet running, this also starts it automatically):

/etc/init.d/keepalived start

[root@Rserver-1 conf]# ps -ef | grep haproxy
nobody   14766     1  0 19:13 ?  00:00:01 /usr/local/haproxy/sbin/haproxy -f /usr/local/haproxy/conf/haproxy.cfg
root     16034  8237  0 19:56 pts/2  00:00:00 grep haproxy
[root@Rserver-1 conf]# ps -ef | grep keepalived
root     16016     1  0 19:56 ?  00:00:00 keepalived -D
root     16018 16016  0 19:56 ?  00:00:00 keepalived -D
root     16019 16016  0 19:56 ?  00:00:00 keepalived -D
root     16102  8237  0 19:56 pts/2  00:00:00 grep keepalived
[root@Rserver-1 conf]#

Step 6: on both HA machines, check who holds the VIP:

Master:
[root@Rserver-1 conf]# ip addr list | grep 192.168.236
    inet 192.168.236.143/24 brd 192.168.236.255 scope global eth0
    inet 192.168.236.230/32 scope global eth0
[root@Rserver-1 conf]#

Backup:
[root@Lserver-1 keepalived]# ip addr list | grep 192.168.236
    inet 192.168.236.192/24 brd 192.168.236.255 scope global eth0
[root@Lserver-1 keepalived]#

Step 7: kill haproxy on the master; keepalived restarts it within about 3 seconds:

[root@Rserver-1 conf]# killall haproxy
[root@Rserver-1 conf]# ps -ef | grep haproxy
nobody   14766     1  0 19:13 ?  00:00:02 /usr/local/haproxy/sbin/haproxy -f /usr/local/haproxy/conf/haproxy.cfg
root     16826  8237  0 20:01 pts/2  00:00:00 grep haproxy

Step 8: stop keepalived on the master; the backup takes over immediately:

Master:
[root@Rserver-1 conf]# /etc/init.d/keepalived stop
[root@Rserver-1 conf]# ip addr list | grep 192.168.236
    inet 192.168.236.143/24 brd 192.168.236.255 scope global eth0
[root@Rserver-1 conf]#

Backup:
[root@Lserver-1 keepalived]# ip addr list | grep 192.168.236
    inet 192.168.236.192/24 brd 192.168.236.255 scope global eth0
    inet 192.168.236.230/32 scope global eth0
[root@Lserver-1 keepalived]#
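The failover behaviour in steps 7 and 8 hinges on the branch logic inside check_haproxy.sh. As a rough illustration, the master-side check can be factored into a function that takes the process count as an argument, so the decision logic can be exercised without a live haproxy. This is only a sketch: the binary and config paths are the ones used in this article, and the function name is made up for the example.

```shell
# Hypothetical paths, matching the layout used in this article.
HAPROXY_BIN=/usr/local/haproxy/sbin/haproxy
CFG=/usr/local/haproxy/conf/haproxy.cfg

# Sketch of the master-side health check. The real script gets the
# count from: ps -C haproxy --no-header | wc -l
check_haproxy() {
  count=$1
  if [ "$count" -eq 0 ]; then
    # haproxy is gone: restart it (real script then re-checks and,
    # if the restart failed, stops keepalived to release the VIP)
    echo "haproxy down, restarting: $HAPROXY_BIN -f $CFG"
  else
    echo "haproxy running ($count process(es))"
  fi
}

check_haproxy 0
check_haproxy 1
```

Stopping keepalived when the restart fails is what lets the backup claim the VIP: losing the VRRP advertisements is the backup's only signal that the master is unusable.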
May 11, 2019
12,108 reads
0 comments
14 likes
2019-04-10
MySQL optimization: thread_cache_size
1. Checking the server's thread counters:

show global status like 'Thread%';

Threads_created is the number of threads created so far. If it is large and keeps growing, the MySQL server is constantly creating new threads, which is relatively expensive; in that case consider raising thread_cache_size in the configuration file.

2. The thread_cache_size parameter

thread_cache_size is the number of idle threads the server keeps cached for reuse. When a client disconnects and the cache is not full, the thread that served it is put into the cache instead of being destroyed, ready to serve the next client. When a new connection arrives, a thread is taken from the cache if one is available; if the cache is empty, a new thread is created. If many new threads are being created, increasing this value improves performance.

Sizing thread_cache_size: for short-lived connections, set it somewhat higher, because short connections constantly create and destroy threads; with a large enough cache the threads stay available for reuse, which is a clear performance win. For long-lived connections the parameter is still worth setting: connection-pool problems can make connections unstable and cause the same create/destroy churn, but since that situation is rarer, a smaller value of roughly 50-100 is usual.

Rule of thumb by physical memory (comparing the Connections and Threads_created status variables shows the effect of the setting):

1 GB  -> 8
2 GB  -> 16
3 GB  -> 32
>3 GB -> 64

Check the current setting (it is a system variable, not a status variable):

show global variables like 'thread_cache_size';

Ways to change it:

1. At runtime: mysql> set global thread_cache_size=16;
2. Persistently: edit /etc/my.cnf and add or change: thread_cache_size = 16
3. Killing threads: mysqladmin kill terminates a specific thread connected to the MySQL server (mysqladmin also provides start-slave and stop-slave).
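A common way to judge whether thread_cache_size is big enough is the ratio of Threads_created to Connections, which approximates the thread-cache miss rate. A minimal shell sketch with made-up sample numbers; in practice both counters come from SHOW GLOBAL STATUS:

```shell
# Hypothetical sample values. In practice read them with e.g.:
#   mysql -e "SHOW GLOBAL STATUS LIKE 'Connections'"
#   mysql -e "SHOW GLOBAL STATUS LIKE 'Threads_created'"
connections=10000
threads_created=150

# Miss rate = Threads_created / Connections. A low percentage means
# most new connections reuse a cached thread instead of creating one.
miss_rate=$(( threads_created * 100 / connections ))
echo "thread cache miss rate: ${miss_rate}%"
```

If the percentage stays high (often quoted as above a few percent) after the server has warmed up, raising thread_cache_size is the usual next step.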
April 10, 2019
6,000 reads
0 comments
49 likes
2019-03-29
Fixing the docker run error: oci runtime error
While deploying a Docker image on a newly provisioned server I hit an error; here is a record of the fix.

Starting the container failed with:

Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "process_linux.go:258: applying cgroup configuration for process caused \"Cannot set property TasksAccounting

Docker had been installed with yum install docker. After some searching, this turned out to be a compatibility problem between the Linux version and the Docker version, so the cure is to remove the old package and install the latest release.

0. Check your current kernel version:

uname -r

1. Log in to CentOS with root privileges and make sure the yum packages are up to date:

sudo yum update

2. Remove any old Docker packages (if an old version was installed):

sudo yum remove docker docker-common docker-selinux docker-engine

3. Install the required packages; yum-utils provides yum-config-manager, and the other two are dependencies of the devicemapper storage driver:

sudo yum install -y yum-utils device-mapper-persistent-data lvm2

4. Set up the yum repository:

sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

5. List every Docker version in the repositories and pick a specific one to install if needed:

yum list docker-ce --showduplicates | sort -r

6. Install Docker:

sudo yum install docker-ce

7. Start it and enable it at boot:

sudo systemctl start docker
sudo systemctl enable docker

8. Verify the installation (output showing both a Client and a Server section means Docker is installed and the daemon is running):

docker version

After all of the above, pull the image again and run docker run: problem solved.
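Step 0 is not just a formality: Docker CE's documented minimum on CentOS is kernel 3.10, and running an incompatible kernel/Docker combination is exactly the kind of mismatch that produced the cgroup error above. A small sketch of the version comparison; the kernel string is hard-coded here for illustration, whereas in practice it would come from uname -r:

```shell
# Hypothetical sample value; in practice: kernel=$(uname -r)
kernel="3.10.0-957.el7.x86_64"

# Pull the major and minor numbers out of the release string.
major=${kernel%%.*}          # "3"
rest=${kernel#*.}            # "10.0-957.el7.x86_64"
minor=${rest%%.*}            # "10"

# Docker CE on CentOS requires kernel >= 3.10.
if [ "$major" -gt 3 ] || { [ "$major" -eq 3 ] && [ "$minor" -ge 10 ]; }; then
  echo "kernel $kernel: ok for docker-ce"
else
  echo "kernel $kernel: too old, upgrade first"
fi
```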
March 29, 2019
49,385 reads
0 comments
951 likes
2019-03-11
Inner classes: non-static, static, and anonymous
package com.weitip.oop;

/**
 * Created by IntelliJ IDEA.
 * User: loong
 * Date: 2019/3/11
 * Time: 15:35
 * Description: com.weitip.oop
 **/
public class InnerClass {
    public static void main(String[] args) {
        Outer.Inner inner = new Outer().new Inner();
        inner.show();
        Outer.Inner2 inner2 = new Outer.Inner2();
        inner2.show();
    }
}

class Outer {
    private int age = 20;

    class Inner {
        int age = 999;

        public void show() {
            int age = 999999;
            System.out.println("外部类属性age:" + Outer.this.age);
            System.out.println("内部类属性age:" + this.age);
            System.out.println("局部变量age:" + age);
        }
    }

    static class Inner2 {
        public static void show() {
            System.out.println("内部静态类被调用");
        }
    }
}

package com.weitip.oop;

/**
 * Created by IntelliJ IDEA.
 * User: loong
 * Date: 2019/3/11
 * Time: 16:01
 * Description: com.weitip.oop
 **/
public class AnonymousInnerClass {
    public static void test(AA a) {
        a.aa();
    }

    public static void main(String[] args) {
        AnonymousInnerClass.test(new AA() {
            @Override
            public void aa() {
                System.out.println("匿名内部类被调用");
            }
        });
    }
}

interface AA {
    void aa();
}
March 11, 2019
1,950 reads
0 comments
0 likes
2019-03-02
Beyond Compare 4 code comparison tool
Extraction code: bh69
March 2, 2019
4,730 reads
0 comments
4 likes