Fixing "error while loading shared libraries: libtinfo.so.5" when installing MySQL on Linux

I installed MySQL from the Linux - Generic binary package; the installation steps themselves are omitted here. After working through the setup the mysql service finally started, but connecting to it with the client failed with:

mysql: error while loading shared libraries: libtinfo.so.5: cannot open shared object file: No such file or directory

Solution: newer distributions (CentOS 8 / RHEL 8, for example) ship libtinfo.so.6 but not libtinfo.so.5, so create a compatibility symlink that points the old soname at the library that is actually installed:
sudo ln -s /usr/lib64/libtinfo.so.6.1 /usr/lib64/libtinfo.so.5

Tengine installation preparation
Install the dependencies required to build Tengine:

gcc openssl-devel pcre-devel zlib-devel

If the machine can run yum, install them all at once with:

yum install gcc openssl-devel pcre-devel zlib-devel -y

Tengine download page:
http://tengine.taobao.org/

Install location
Usually under /usr/local; create a tengine directory there.

Compile
Extract the uploaded Tengine package. Since nginx (and therefore Tengine) is written in C, the unpacked source must be compiled first: change into the tengine source directory and configure it with ./configure --prefix=<install directory>. For the install location chosen above, run inside the source directory:

./configure --prefix=/usr/local/tengine

Install
Build and install the compiled source with make && make install.

Configure the Tengine service start-up script
Add a start-up script file named nginx under /etc/init.d/ with the following contents:

#!/bin/sh
#
# nginx - this script starts and stops the nginx daemon
#
# chkconfig:   - 85 15 
# description:  Nginx is an HTTP(S) server, HTTP(S) reverse \
#               proxy and IMAP/POP3 proxy server
# processname: nginx
# config:      /etc/nginx/nginx.conf
# config:      /etc/sysconfig/nginx
# pidfile:     /var/run/nginx.pid
 
# Source function library.
. /etc/rc.d/init.d/functions
 
# Source networking configuration.
. /etc/sysconfig/network
 
# Check that networking is up.
[ "$NETWORKING" = "no" ] && exit 0
# NOTE: change this to your own Tengine install directory
nginx="/usr/local/tengine/sbin/nginx"
prog=$(basename $nginx)
 
NGINX_CONF_FILE="/usr/local/tengine/conf/nginx.conf"
 
[ -f /etc/sysconfig/nginx ] && . /etc/sysconfig/nginx
 
lockfile=/var/lock/subsys/nginx
 
make_dirs() {
   # make required directories
   user=`$nginx -V 2>&1 | grep "configure arguments:" | sed 's/[^*]*--user=\([^ ]*\).*/\1/g' -`
   options=`$nginx -V 2>&1 | grep 'configure arguments:'`
   for opt in $options; do
       if [ `echo $opt | grep '.*-temp-path'` ]; then
           value=`echo $opt | cut -d "=" -f 2`
           if [ ! -d "$value" ]; then
               # echo "creating" $value
               mkdir -p $value && chown -R $user $value
           fi
       fi
   done
}
 
start() {
    [ -x $nginx ] || exit 5
    [ -f $NGINX_CONF_FILE ] || exit 6
    make_dirs
    echo -n $"Starting $prog: "
    daemon $nginx -c $NGINX_CONF_FILE
    retval=$?
    echo
    [ $retval -eq 0 ] && touch $lockfile
    return $retval
}
 
stop() {
    echo -n $"Stopping $prog: "
    killproc $prog -QUIT
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile
    return $retval
}
 
restart() {
    configtest || return $?
    stop
    sleep 1
    start
}
 
reload() {
    configtest || return $?
    echo -n $"Reloading $prog: "
    killproc $nginx -HUP
    RETVAL=$?
    echo
}
 
force_reload() {
    restart
}
 
configtest() {
  $nginx -t -c $NGINX_CONF_FILE
}
 
rh_status() {
    status $prog
}
 
rh_status_q() {
    rh_status >/dev/null 2>&1
}
 
case "$1" in
    start)
        rh_status_q && exit 0
        $1
        ;;
    stop)
        rh_status_q || exit 0
        $1
        ;;
    restart|configtest)
        $1
        ;;
    reload)
        rh_status_q || exit 7
        $1
        ;;
    force-reload)
        force_reload
        ;;
    status)
        rh_status
        ;;
    condrestart|try-restart)
        rh_status_q || exit 0
        ;;
    *)
        echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload|configtest}"
        exit 2
esac

Make the script executable
chmod 777 /etc/init.d/nginx
Manage the service
service nginx start    # start the service
service nginx stop     # stop the service
service nginx status   # show service status
service nginx reload   # reload the configuration without restarting

While converting a JSON string into a Java object, IDEA failed to compile and reported: Error:(24, 35) java: constant string too long

Method 1
Go to File -> Settings -> Build, Execution, Deployment -> Compiler -> Java Compiler, set Use Compiler to Eclipse, and click Apply.

That is enough to make the project compile.

Method 2
How to handle "constant string too long"
Background: I wanted to analyze a long string.
1. Copy the text and assign it to a variable str:

String str = <the copied text>;
String[] parts = str.split(",");
System.out.println(parts.length);

Running this fails with:

constant string too long
2. It turns out a string constant cannot be longer than 65535 - 1 bytes: the class file's constant pool stores the literal as UTF-8 with a 2-byte length field, so javac rejects literals at that ceiling.

My text, however, was over 100,000 bytes. The workaround is to split the literal and build the full string at run time:

StringBuilder sb = new StringBuilder();
sb.append(<first half of the text>);
sb.append(<second half of the text>);
String str = sb.toString();
String[] parts = str.split(",");
System.out.println(parts.length);
Run it again and it compiles and runs without the error.
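
As a quick sanity check (not part of the original note): since the 65535-byte ceiling is on the encoded size of the literal in the class file's constant pool, a rough way to see whether a piece of text would fit is to measure its UTF-8 byte length. The class and variable names below are made up for the illustration:

import java.nio.charset.StandardCharsets;

public class ConstantLengthCheck {

    public static void main(String[] args) {
        // Stand-in sample; in practice this would be the text you intend to embed as a literal.
        String text = "alpha,beta,gamma";
        // The constant pool stores the literal as UTF-8 with a 2-byte length field,
        // so the encoded form has to stay under the 65535-byte ceiling.
        int utf8Bytes = text.getBytes(StandardCharsets.UTF_8).length;
        System.out.println("UTF-8 length in bytes: " + utf8Bytes);
        System.out.println("fits in the constant pool: " + (utf8Bytes < 65535));
    }
}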

The highest good is like water: water benefits all things and does not contend.

The example below shows that ConcurrentHashMap only guarantees atomicity for its individual operations, not for a compound "check the size, then fill the gap" sequence. Ten parallel tasks each read how many entries are missing from a target of 1000 and then add that many:

import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.TimeUnit;
import java.util.function.Function;
import java.util.stream.Collectors;
import java.util.stream.IntStream;
import java.util.stream.LongStream;

public class Main {

    private static final int THREAD_COUNT = 10;
    private static final int ITEM_COUNT = 1000;

    // Build a ConcurrentHashMap with `count` entries keyed by random UUIDs.
    private static ConcurrentHashMap<String, Long> getData(int count) {
        return LongStream.rangeClosed(1, count)
                .boxed()
                .collect(Collectors.toConcurrentMap(
                        i -> UUID.randomUUID().toString(),
                        Function.identity(),
                        (t1, t2) -> t1,
                        ConcurrentHashMap::new));
    }

    public static void main(String[] args) throws InterruptedException {
        // Start with 900 entries; the goal is to top the map up to exactly 1000.
        ConcurrentHashMap<String, Long> stringLongConcurrentHashMap = getData(ITEM_COUNT - 100);
        System.out.println("init size:" + stringLongConcurrentHashMap.size());
        ForkJoinPool forkJoinPool = new ForkJoinPool(THREAD_COUNT);
        // Ten parallel tasks each read the current size, compute the gap, and fill it.
        forkJoinPool.execute(() -> IntStream.rangeClosed(1, 10).parallel().forEach(i -> {
            int gap = ITEM_COUNT - stringLongConcurrentHashMap.size();
            System.out.println("gap size:" + gap);
            stringLongConcurrentHashMap.putAll(getData(gap));
        }));
        forkJoinPool.shutdown();
        forkJoinPool.awaitTermination(1, TimeUnit.HOURS);
        System.out.println("finish size:" + stringLongConcurrentHashMap.size());
    }
}

Without any extra locking the run above prints the output below. All ten tasks read the size before any other task's putAll had completed, so each computed a gap of 100 and inserted 100 entries, leaving the map with 900 + 10 × 100 = 1900 entries instead of 1000:

init size:900
gap size:100
gap size:100
gap size:100
gap size:100
gap size:100
gap size:100
gap size:100
gap size:100
gap size:100
gap size:100
finish size:1900

After adding a lock, i.e. wrapping the size check and the putAll in one synchronized block so the compound operation becomes atomic:

import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.TimeUnit;
import java.util.function.Function;
import java.util.stream.Collectors;
import java.util.stream.IntStream;
import java.util.stream.LongStream;

public class Main {

    private static final int THREAD_COUNT = 10;
    private static final int ITEM_COUNT = 1000;

    // Build a ConcurrentHashMap with `count` entries keyed by random UUIDs.
    private static ConcurrentHashMap<String, Long> getData(int count) {
        return LongStream.rangeClosed(1, count)
                .boxed()
                .collect(Collectors.toConcurrentMap(
                        i -> UUID.randomUUID().toString(),
                        Function.identity(),
                        (t1, t2) -> t1,
                        ConcurrentHashMap::new));
    }

    public static void main(String[] args) throws InterruptedException {
        ConcurrentHashMap<String, Long> stringLongConcurrentHashMap = getData(ITEM_COUNT - 100);
        System.out.println("init size:" + stringLongConcurrentHashMap.size());
        ForkJoinPool forkJoinPool = new ForkJoinPool(THREAD_COUNT);
        forkJoinPool.execute(() -> IntStream.rangeClosed(1, 10).parallel().forEach(i -> {
            // Lock the map so that reading the size and filling the gap happen as one atomic step.
            synchronized (stringLongConcurrentHashMap) {
                int gap = ITEM_COUNT - stringLongConcurrentHashMap.size();
                System.out.println("gap size:" + gap);
                stringLongConcurrentHashMap.putAll(getData(gap));
            }
        }));
        forkJoinPool.shutdown();
        forkJoinPool.awaitTermination(1, TimeUnit.HOURS);
        System.out.println("finish size:" + stringLongConcurrentHashMap.size());
    }
}

Output: only the first task that enters the synchronized block sees a non-zero gap; the rest find the map already full, so the final size is exactly 1000.

init size:900
gap size:100
gap size:0
gap size:0
gap size:0
gap size:0
gap size:0
gap size:0
gap size:0
gap size:0
gap size:0
finish size:1000
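
The synchronized block fixes the count, but it pushes every task through one critical section and gives up most of what ConcurrentHashMap offers. As a side note not covered by the example above: when the compound logic can be expressed per key, the map's own atomic methods (putIfAbsent, computeIfAbsent, merge) remove the need for an external lock. A minimal sketch, using a hypothetical expensiveLookup helper:

import java.util.concurrent.ConcurrentHashMap;
import java.util.stream.IntStream;

public class AtomicPerKeyDemo {

    public static void main(String[] args) {
        ConcurrentHashMap<String, Long> map = new ConcurrentHashMap<>();
        // Many threads race on the same key, but computeIfAbsent performs the
        // "check then insert" step atomically, so the mapping function runs only
        // once and no external synchronization is required.
        IntStream.rangeClosed(1, 10).parallel()
                .forEach(i -> map.computeIfAbsent("answer", AtomicPerKeyDemo::expensiveLookup));
        System.out.println(map); // prints {answer=42}
    }

    // Hypothetical expensive computation; executed only once even under contention.
    private static Long expensiveLookup(String key) {
        System.out.println("computing value for " + key);
        return 42L;
    }
}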