Redis Study Notes

These notes were written up after watching the "狂神说Java" video course: https://www.bilibili.com/video/BV1S54y1R7SB

Redis Overview

Redis (Remote Dictionary Server), i.e. the remote dictionary service.

  • Redis is an open-source key-value database written in ANSI C: network-capable, memory-based yet persistable, with logging, and offering APIs for many languages. From March 15, 2010 its development was hosted by VMware; since May 2013 it has been sponsored by Pivotal.

  • Redis is a member of the NoSQL camp. Its range of key-value data types adapts to the storage needs of different scenarios, and through higher-level interfaces it can fill roles such as a cache or part of a queue system.

What can Redis do?

  1. In-memory storage with persistence: data held in memory is lost on power-off, so persistence (RDB, AOF) matters
  2. High efficiency, suitable as a high-speed cache
  3. Publish/subscribe system
  4. Geospatial analysis
  5. Timers and counters (e.g. page views)
  6. …

Features?

  1. A variety of data types
  2. Persistence
  3. Clustering
  4. Transactions

Installing Redis

Installing Redis with Docker

Install command:

$ docker run -d -p 6379:6379 -v ~/var/lib/redis/conf/redis.conf:/usr/local/etc/redis/redis.conf --name redis redis redis-server /usr/local/etc/redis/redis.conf

This mounts the host config file ~/var/lib/redis/conf/redis.conf to /usr/local/etc/redis/redis.conf inside the container and starts redis-server with that config.

Installing Redis on CentOS 7

One-click install script

vi redis-install.sh

Write the following into redis-install.sh, then save and exit:

yum -y update
yum -y install gcc gcc-c++
yum -y install centos-release-scl
yum -y install devtoolset-9-gcc devtoolset-9-gcc-c++ devtoolset-9-binutils
echo "source /opt/rh/devtoolset-9/enable" >>/etc/profile
source /etc/profile
cd /opt
yum -y install wget
wget http://download.redis.io/releases/redis-6.0.4.tar.gz
tar xzf redis-6.0.4.tar.gz
cd redis-6.0.4
make && make install
mkdir /etc/redis
cp /opt/redis-6.0.4/redis.conf /etc/redis/redis.conf
echo "Success install redis"
echo "redis config file path is /etc/redis/redis.conf"

Run the script to install Redis:

chmod a+x redis-install.sh    # make the script executable
sh redis-install.sh # run the install script

Step-by-step installation

  1. Prepare the environment
yum update
yum -y install gcc gcc-c++ # install the C and C++ build dependencies

Compiling Redis will fail at this point because CentOS 7 ships gcc 4.8.5 by default, while building Redis 6.x requires gcc 5.3 or above. Upgrade gcc as follows:

gcc -v     # check which gcc version the system uses
yum -y install centos-release-scl
yum -y install devtoolset-9-gcc devtoolset-9-gcc-c++ devtoolset-9-binutils
scl enable devtoolset-9 bash # scl is only temporary; the system gcc returns after exiting the shell or rebooting
# make the new gcc take effect on every login
echo "source /opt/rh/devtoolset-9/enable" >>/etc/profile
source /etc/profile
  2. Download the Redis tarball (programs conventionally live under /opt) and extract it:
cd /opt
wget http://download.redis.io/releases/redis-6.0.4.tar.gz
tar xzf redis-6.0.4.tar.gz
cd /opt/redis-6.0.4


  3. Compile the source:
make

If the build fails, run make clean to remove the compiled files, then retry.

  4. Install Redis:
make install

Redis installs to /usr/local/bin by default.

  5. Start Redis

Redis does not run in the background by default; enable that in the config file.

mkdir ~/jconfig
cp /opt/redis-6.0.4/redis.conf ~/jconfig/redis.conf # copy the Redis config file
yum install vim
vim ~/jconfig/redis.conf # edit redis.conf

Edit redis.conf: set daemonize yes so Redis runs as a daemon.

Start Redis with the chosen config file:

redis-server ~/jconfig/redis.conf
  6. Check whether the Redis process is running:
ps -ef |grep redis
  7. Test the connection:
redis-cli
  8. Shut down the Redis server:
127.0.0.1:6379> SHUTDOWN
not connected> exit

Redis GUI Clients

JetBrains plugins: Iedis / Iedis2

As of this writing, the Iedis plugin no longer works; Iedis2 is paid, with a 30-day trial.

Open-source tool: redis-manager

GitHub: https://github.com/ngbdf/redis-manager

Docker Hub: https://hub.docker.com/r/reasonduan/redis-manager

Quick start with Docker:

  1. Create the database in MySQL:
CREATE DATABASE `redis_manager` DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci;
  2. Start redis-manager via Docker:
$ docker run  --net=host --name redis-manager  \
-e DATASOURCE_DATABASE='redis_manager' \
-e DATASOURCE_URL='jdbc:mysql://127.0.0.1:3306/redis_manager?useUnicode=true&characterEncoding=utf-8&serverTimezone=GMT%2b8' \
-e DATASOURCE_USERNAME='root' \
-e DATASOURCE_PASSWORD='123456' \
reasonduan/redis-manager

Because --net=host has no effect on Mac and Windows, this currently works only on Linux.

Redis Performance Testing

redis-benchmark is the official bundled performance-testing tool.

Example: test with 100 concurrent connections and 10,000 requests in total:
redis-benchmark -h localhost -p 6379 -c 100 -n 10000

Common Redis Questions

Why is Redis's port 6379? (Just trivia.) Fan culture: 6379 spells "MERZ" on a phone keypad, a nod to the Italian entertainer Alessia Merz.

Reference: https://blog.csdn.net/weixin_42075590/article/details/80748128

Redis is single-threaded, and still fast. Officially: Redis works in memory, so the CPU is not its bottleneck; the machine's memory and the network bandwidth are. Since a single thread can do the job, there is no need for multiple threads.

Redis is written in C, and the official figures cite 100,000+ QPS, no worse than Memcached, which is likewise key-value.

Why is single-threaded Redis still so fast?

  1. Misconception 1: a high-performance server must be multithreaded. Not necessarily.
  2. Misconception 2: multithreading (which incurs CPU context switches) is always more efficient than a single thread. Also false.

The core reason: Redis keeps all of its data in memory, so single-threaded access is the most efficient option. Multithreading means CPU context switches, which are costly; for an in-memory system, avoiding context switches is fastest, and keeping every read and write on one CPU is the best plan.

Redis Basics

Redis has 16 databases by default and starts on database 0.

Switch databases with select:

127.0.0.1:6379> select 3   # switch to database 3
OK
127.0.0.1:6379[3]> dbsize # check the database size
(integer) 0

Store values with set key value:

127.0.0.1:6379[3]> set name finlu  # set a key-value pair
OK
127.0.0.1:6379[3]> dbsize
(integer) 1
127.0.0.1:6379[3]> select 7
OK
127.0.0.1:6379[7]> dbsize
(integer) 0
127.0.0.1:6379[7]> get name # fetch the value stored under name
(nil)
127.0.0.1:6379[7]> select 3
OK
127.0.0.1:6379[3]> get name
"finlu"

Use keys to list the keys in the current database.

flushdb clears the current database; FLUSHALL clears every database.

127.0.0.1:6379[3]> keys *   # list all keys in the current database
1) "name"
127.0.0.1:6379[3]> flushdb # clear the current database
OK
127.0.0.1:6379[3]> keys *
(empty array)

EXISTS checks whether a key exists.

move relocates a key; the trailing argument is the target database index.

127.0.0.1:6379[7]> set name finlu
OK
127.0.0.1:6379[7]> keys *
1) "name"
127.0.0.1:6379[7]> EXISTS name
(integer) 1
127.0.0.1:6379[7]> EXISTS name1
(integer) 0
127.0.0.1:6379[7]> move name 1 # move key "name" to database 1
(integer) 1
127.0.0.1:6379[7]> ping # check that the connection is alive
PONG

Redis Data Types

The official description:

Redis is an open source (BSD licensed), in-memory data structure store, used as a database, cache and message broker. It supports data structures such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, hyperloglogs and geospatial indexes with radius queries. Redis has built-in replication, Lua scripting, LRU eviction, transactions and different levels of on-disk persistence, and provides high availability via Redis Sentinel and automatic partitioning with Redis Cluster.

Basic Redis data types

Redis keys

get <key>: fetch the value stored at the key

EXPIRE <key> <seconds>: set the key's time-to-live

ttl <key>: seconds left before the key expires; -2 means it has already expired

127.0.0.1:6379[7]> keys *
1) "name"
2) "age"
127.0.0.1:6379[7]> get name
"finlu"
127.0.0.1:6379[7]> EXPIRE name 10 # set the expiry
(integer) 1
127.0.0.1:6379[7]> ttl name # check the remaining TTL
(integer) 7
127.0.0.1:6379[7]> ttl name
(integer) 6
127.0.0.1:6379[7]> ttl name
(integer) 5
127.0.0.1:6379[7]> ttl name
(integer) 4
127.0.0.1:6379[7]> ttl name
(integer) 2
127.0.0.1:6379[7]> ttl name
(integer) 1
127.0.0.1:6379[7]> ttl name
(integer) -2 # the key has expired
127.0.0.1:6379[7]> type age # check a key's type
string

String

127.0.0.1:6379[7]> keys *
1) "key1"
127.0.0.1:6379[7]> get key1
"v1"
127.0.0.1:6379[7]> APPEND key1 2 # append "2" to key1; if key1 did not exist this would behave like SET and create it
(integer) 3
127.0.0.1:6379[7]> get key1
"v12"
127.0.0.1:6379[7]> STRLEN key1 # length of the value stored at key1
(integer) 3
127.0.0.1:6379[7]> APPEND key1 "|redis"
(integer) 9
127.0.0.1:6379[7]> STRLEN key1
(integer) 9
127.0.0.1:6379[7]> set views 0
OK
127.0.0.1:6379[7]> get views
"0"
127.0.0.1:6379[7]> INCR views # increment views by 1
(integer) 1
127.0.0.1:6379[7]> INCR views
(integer) 2
127.0.0.1:6379[7]> get views
"2"
127.0.0.1:6379[7]> DECR views # decrement views by 1
(integer) 1
127.0.0.1:6379[7]> DECR views
(integer) 0
127.0.0.1:6379[7]> DECR views
(integer) -1
127.0.0.1:6379[7]> INCRBY views 11 # increase views by 11 (a custom step size)
(integer) 10
127.0.0.1:6379[7]> INCRBY views 10
(integer) 20
127.0.0.1:6379[7]> DECRBY views 10 # decrease views by 10
(integer) 10
127.0.0.1:6379[7]> DECRBY views 5
(integer) 5

String ranges

127.0.0.1:6379[7]> set key1 "hello,finlu"
OK
127.0.0.1:6379[7]> get key1
"hello,finlu"
127.0.0.1:6379[7]> GETRANGE key1 0 3 # substring over the inclusive range [0,3]
"hell"
127.0.0.1:6379[7]> GETRANGE key1 0 -1 # the whole string; -1 means up to the end
"hello,finlu"

Overwriting part of a string at a given offset:

127.0.0.1:6379[7]> set key2 abcdefg
OK
127.0.0.1:6379[7]> get key2
"abcdefg"
127.0.0.1:6379[7]> SETRANGE key2 1 xx # overwrite starting at offset 1; the number of characters replaced equals the length of the new string
(integer) 7
127.0.0.1:6379[7]> get key2
"axxdefg"
127.0.0.1:6379[7]> SETRANGE key2 6 yyyyy
(integer) 11
127.0.0.1:6379[7]> get key2 # if offset + new length exceeds the original length, the overflow is appended to the string
"axxdefyyyyy"

SETEX (set with expire): set a value together with an expiry

SETNX (set if not exists): set the value only when the key does not exist

127.0.0.1:6379[7]> SETEX key3 30 "finlu"   # set a key with a 30-second TTL
OK
127.0.0.1:6379[7]> get key3
"finlu"
127.0.0.1:6379[7]> ttl key3
(integer) 24
127.0.0.1:6379[7]> SETNX key4 hello # the key does not exist, so it is set; returns 1
(integer) 1
127.0.0.1:6379[7]> keys *
1) "key3"
2) "key4"
127.0.0.1:6379[7]> setnx key4 "Redis" # the key already exists, so nothing happens; returns 0
(integer) 0
127.0.0.1:6379[7]> get key4
"hello"
127.0.0.1:6379[7]> ttl key3
(integer) -2
127.0.0.1:6379[7]> keys *
1) "key4"

MSET: set multiple key-value pairs at once

MGET: fetch multiple values at once

MSETNX: set multiple key-value pairs only if none of the keys exist. It is an atomic operation: either all pairs are set, or none are.

127.0.0.1:6379[7]> keys *
(empty array)
127.0.0.1:6379[7]> mset key1 v1 key2 v2 key3 v3 # set several values at once
OK
127.0.0.1:6379[7]> keys *
1) "key3"
2) "key1"
3) "key2"
127.0.0.1:6379[7]> mget key1 key2 key3 # fetch several values at once
1) "v1"
2) "v2"
3) "v3"
127.0.0.1:6379[7]> msetnx key1 v1 key4 v4 # msetnx is atomic: key1 already exists, so nothing is set
(integer) 0
127.0.0.1:6379[7]> get key4
(nil)

set user:1 {name: zhangsan, age: 3} stores a user:1 object as a JSON string.

The key itself can carry a clever design: user:{id}:{field}.

Combining mset with this key scheme gives a handy way to write objects field by field:

127.0.0.1:6379[7]> mset user:1:name zhangsan user:1:age 2
OK
127.0.0.1:6379[7]> mget user:1:name user:1:age
1) "zhangsan"
2) "2"

getset: a combined command that gets the old value and then sets a new one.

127.0.0.1:6379[7]> GETSET db redis   # if no value exists, returns nil and sets the new value
(nil)
127.0.0.1:6379[7]> get db
"redis"
127.0.0.1:6379[7]> getset db mongodb # if a value exists, returns the old value and sets the new one
"redis"
127.0.0.1:6379[7]> get db
"mongodb"

String use cases (the value can be a number as well as text):

  1. Counters: incrby (see the example below)
  2. Multi-unit statistics, e.g. follower counts: uid:3234:follow 0
  3. Like counts
  4. Object cache storage
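A page-view counter, for instance, can lean directly on INCR/INCRBY. A minimal transcript (the key name views:article:1 is only an illustration):

127.0.0.1:6379> INCR views:article:1    # a missing key starts from 0, so the first INCR yields 1
(integer) 1
127.0.0.1:6379> INCRBY views:article:1 10
(integer) 11
127.0.0.1:6379> GET views:article:1
"11"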

List

A basic data type.

In Redis, a list can implement special structures such as a stack, a queue, or a blocking queue.

List commands mostly start with L (the R-prefixed variants work on the right end); Redis command names are not case-sensitive.

127.0.0.1:6379[7]> keys *
(empty array)
127.0.0.1:6379[7]> LPUSH list one # push one or more values onto the head of the list
(integer) 1
127.0.0.1:6379[7]> LPUSH list two
(integer) 2
127.0.0.1:6379[7]> LPUSH list three
(integer) 3
127.0.0.1:6379[7]> LRANGE list 0 -1 # fetch every value in the list
1) "three"
2) "two"
3) "one"
127.0.0.1:6379[7]> LRANGE list 0 1 # fetch values within a range
1) "three"
2) "two"
127.0.0.1:6379[7]> RPUSH list right # push a value onto the tail of the list
(integer) 4
127.0.0.1:6379[7]> LRANGE list 0 -1
1) "three"
2) "two"
3) "one"
4) "right"

LPOP: remove a value from the head

RPOP: remove a value from the tail

127.0.0.1:6379[7]> LPOP list # remove a value from the head
"three"
127.0.0.1:6379[7]> LRANGE list 0 -1
1) "two"
2) "one"
3) "right"
127.0.0.1:6379[7]> RPOP list # remove a value from the tail
"right"
127.0.0.1:6379[7]> LRANGE list 0 -1
1) "two"
2) "one"

LINDEX fetches a single value by index; a missing index returns nil.

127.0.0.1:6379[7]> LINDEX list 0
"two"
127.0.0.1:6379[7]> LINDEX list 1
"one"
127.0.0.1:6379[7]> LINDEX list 2
(nil)

LLEN: get the length of the list

127.0.0.1:6379[7]> flushdb
OK
127.0.0.1:6379[7]> LPUSH list one
(integer) 1
127.0.0.1:6379[7]> LPUSH list two
(integer) 2
127.0.0.1:6379[7]> LPUSH list three
(integer) 3
127.0.0.1:6379[7]> get list # calling GET on a list is a type error
(error) WRONGTYPE Operation against a key holding the wrong kind of value
127.0.0.1:6379[7]> LLEN list
(integer) 3

LREM: remove occurrences of a value

(e.g. unfollowing: remove a user id from a follow list)

127.0.0.1:6379[7]> LRANGE list 0 -1
1) "three"
2) "three"
3) "two"
4) "one"
127.0.0.1:6379[7]> lrem list 1 three # remove the given number of occurrences of a value (exact match)
(integer) 1
127.0.0.1:6379[7]> lrem list 1 one
(integer) 1
127.0.0.1:6379[7]> LRANGE list 0 -1
1) "three"
2) "two"
127.0.0.1:6379[7]> LPUSH list three
(integer) 3
127.0.0.1:6379[7]> LREM list 2 three
(integer) 2
127.0.0.1:6379[7]> LRANGE list 0 -1
1) "two"

LTRIM: trim (truncate) a list down to a range

127.0.0.1:6379[7]> keys *
(empty array)
127.0.0.1:6379[7]> RPUSH mylist hello
(integer) 1
127.0.0.1:6379[7]> RPUSH mylist hello1
(integer) 2
127.0.0.1:6379[7]> RPUSH mylist hello2
(integer) 3
127.0.0.1:6379[7]> RPUSH mylist hello3
(integer) 4
127.0.0.1:6379[7]> LTRIM mylist 1 2 # trim by index range; the list itself is modified and only the kept elements remain
OK
127.0.0.1:6379[7]> lrange mylist 0 -1
1) "hello1"
2) "hello2"

RPOPLPUSH: remove the last element of one list and push it onto another list.

127.0.0.1:6379[7]> RPUSH mylist hello
(integer) 1
127.0.0.1:6379[7]> RPUSH mylist hello1
(integer) 2
127.0.0.1:6379[7]> RPUSH mylist hello2
(integer) 3
127.0.0.1:6379[7]> RPOPLPUSH mylist myotherlist # pop the tail of mylist and push it onto myotherlist
"hello2"
127.0.0.1:6379[7]> LRANGE mylist 0 -1
1) "hello"
2) "hello1"
127.0.0.1:6379[7]> lrange myotherlist 0 -1
1) "hello2"

LSET: replace the value at a given index with another value, essentially an update.

127.0.0.1:6379[7]> EXISTS list   # check whether the list exists
(integer) 0
127.0.0.1:6379[7]> lset list 0 item # LSET on a missing list is an error
(error) ERR no such key
127.0.0.1:6379[7]> LPUSH list value1
(integer) 1
127.0.0.1:6379[7]> LRANGE list 0 0
1) "value1"
127.0.0.1:6379[7]> LSET list 0 item # if the index exists, the value is updated
OK
127.0.0.1:6379[7]> LRANGE list 0 0
1) "item"
127.0.0.1:6379[7]> LSET list 1 item1 # an out-of-range index is an error
(error) ERR index out of range

LINSERT: insert a value before or after a specific element of the list.

127.0.0.1:6379[7]> RPUSH list hello
(integer) 1
127.0.0.1:6379[7]> RPUSH list world
(integer) 2
127.0.0.1:6379[7]> LINSERT list before "world" "other"
(integer) 3
127.0.0.1:6379[7]> LRANGE list 0 -1
1) "hello"
2) "other"
3) "world"
127.0.0.1:6379[7]> LINSERT list after world !
(integer) 4
127.0.0.1:6379[7]> LRANGE list 0 -1
1) "hello"
2) "other"
3) "world"
4) "!"
127.0.0.1:6379[7]> LINSERT list1 after world ! # on a non-existent list nothing is inserted and 0 is returned
(integer) 0

Summary

  • A list is actually a linked list: values can be inserted before or after a node, on the left or the right
  • If the key does not exist, a new linked list is created
  • If the key exists, new content is added to it
  • If the key is removed, all its values go with it, and the list no longer exists
  • Inserting or changing values at the two ends is most efficient; middle elements are slower

LPUSH + LPOP = stack

LPUSH + RPOP = queue (both patterns are sketched below)
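A quick transcript showing both patterns (the key names stack and queue are illustrative):

127.0.0.1:6379> LPUSH stack a b c   # push a, then b, then c
(integer) 3
127.0.0.1:6379> LPOP stack          # pop from the same end: last in, first out
"c"
127.0.0.1:6379> LPUSH queue a b c
(integer) 3
127.0.0.1:6379> RPOP queue          # pop from the other end: first in, first out
"a"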

Set

Values in a set cannot repeat.

Set commands start with S.

127.0.0.1:6379[7]> SADD myset hello    # add elements to a set
(integer) 1
127.0.0.1:6379[7]> SADD myset finlu
(integer) 1
127.0.0.1:6379[7]> SADD myset world
(integer) 1
127.0.0.1:6379[7]> SMEMBERS myset # list every member of the set
1) "world"
2) "finlu"
3) "hello"
127.0.0.1:6379[7]> SISMEMBER myset hello # test whether a value is in the set
(integer) 1
127.0.0.1:6379[7]> SISMEMBER myset hello1
(integer) 0
127.0.0.1:6379[7]> SCARD myset # number of elements in the set
(integer) 3
127.0.0.1:6379[7]> SREM myset hello # remove a specific element from the set
(integer) 1
127.0.0.1:6379[7]> SMEMBERS myset
1) "world"
2) "finlu"
127.0.0.1:6379[7]> SCARD myset
(integer) 2
127.0.0.1:6379[7]> SRANDMEMBER myset # pick a random element
"finlu"
127.0.0.1:6379[7]> SRANDMEMBER myset
"finlu"
127.0.0.1:6379[7]> SRANDMEMBER myset
"finlu"
127.0.0.1:6379[7]> SRANDMEMBER myset
"world"
127.0.0.1:6379[7]> SRANDMEMBER myset 2 # pick a given number of random elements
1) "world"
2) "finlu"

Removing elements at random:

127.0.0.1:6379[7]> SMEMBERS myset
1) "finlu1"
2) "world"
3) "finlu2"
4) "finlu"
127.0.0.1:6379[7]> SPOP myset # remove a random element; an optional count argument removes several at once
"world"
127.0.0.1:6379[7]> SPOP myset
"finlu2"
127.0.0.1:6379[7]> SMEMBERS myset
1) "finlu1"
2) "finlu"

SMOVE

127.0.0.1:6379[7]> SADD myset hello
(integer) 1
127.0.0.1:6379[7]> SADD myset finlu
(integer) 1
127.0.0.1:6379[7]> SADD myset finlu1
(integer) 1
127.0.0.1:6379[7]> SADD myset finlu2
(integer) 1
127.0.0.1:6379[7]> SADD myset finlu3
(integer) 1
127.0.0.1:6379[7]> SADD myset2 set2
(integer) 1
127.0.0.1:6379[7]> SMOVE myset myset2 hello # move "hello" from myset to myset2
(integer) 1
127.0.0.1:6379[7]> SMEMBERS myset
1) "finlu2"
2) "finlu"
3) "finlu1"
4) "finlu3"
127.0.0.1:6379[7]> SMEMBERS myset2
1) "set2"
2) "hello"
127.0.0.1:6379[7]> SMOVE myset2 myset3 hello # if the destination set does not exist, it is created and the value placed into it
(integer) 1
127.0.0.1:6379[7]> SMEMBERS myset3
1) "hello"

Weibo, Bilibili: shared follows (an intersection).

Operations between sets:

  • Difference
  • Intersection
  • Union

On Weibo, put everyone user A follows into one set, and A's followers into another.

This enables shared follows, shared interests, second-degree friends, and friend recommendations.

127.0.0.1:6379[7]> SADD k1 a
(integer) 1
127.0.0.1:6379[7]> SADD k1 b
(integer) 1
127.0.0.1:6379[7]> SADD k1 c
(integer) 1
127.0.0.1:6379[7]> SADD k2 c
(integer) 1
127.0.0.1:6379[7]> SADD k2 d
(integer) 1
127.0.0.1:6379[7]> SADD k2 e
(integer) 1
127.0.0.1:6379[7]> SDIFF k1 k2 # difference k1 - k2
1) "a"
2) "b"
127.0.0.1:6379[7]> SDIFF k2 k1 # difference k2 - k1
1) "e"
2) "d"
127.0.0.1:6379[7]> SINTER k1 k2 # intersection of k1 and k2
1) "c"
127.0.0.1:6379[7]> SUNION k1 k2 # union of k1 and k2
1) "b"
2) "c"
3) "a"
4) "e"
5) "d"

Hash

A map type: key-map, i.e. key-<field, value>. The value stored under the key is itself a map.

Hash commands start with H:

hset

hget

hmset

hgetall

hdel

127.0.0.1:6379[7]> HSET myhash field1 finlu   # set one field-value pair
(integer) 1
127.0.0.1:6379[7]> HGET myhash field1 # fetch one field's value
"finlu"
127.0.0.1:6379[7]> HMSET myhash field1 hello field2 world # set several field-value pairs
OK
127.0.0.1:6379[7]> HMGET myhash field1 field2 # fetch several fields
1) "hello"
2) "world"
127.0.0.1:6379[7]> HGETALL myhash # fetch every field and value
1) "field1"
2) "hello"
3) "field2"
4) "world"
127.0.0.1:6379[7]> HDEL myhash field1 # delete one field-value pair
(integer) 1
127.0.0.1:6379[7]> HGETALL myhash
1) "field2"
2) "world"

HLEN

HEXISTS

127.0.0.1:6379[7]> HMSET myhsah field1 hello field2 world  # note the typo: this writes to "myhsah", not "myhash"
OK
127.0.0.1:6379[7]> HGETALL myhash
1) "field2"
2) "world"
127.0.0.1:6379[7]> HLEN myhash # number of fields in the hash
(integer) 1
127.0.0.1:6379[7]> HEXISTS myhash field1 # test whether a field exists in the hash
(integer) 0
127.0.0.1:6379[7]> HEXISTS myhash field2
(integer) 1

Fetch only the field names:

Fetch only the values:

127.0.0.1:6379[7]> HKEYS myhash
1) "field2"
127.0.0.1:6379[7]> HVALS myhash
1) "world"

HINCRBY

127.0.0.1:6379[7]> HSET myhash field 1
(integer) 1
127.0.0.1:6379[7]> HSET myhash field3 1
(integer) 1
127.0.0.1:6379[7]> HINCRBY myhash field3 1
(integer) 2
127.0.0.1:6379[7]> HGET myhash field3
"2"
127.0.0.1:6379[7]> HINCRBY myhash field3 -1
(integer) 1
127.0.0.1:6379[7]> HGET myhash field3
"1"

HSETNX

127.0.0.1:6379[7]> HSETNX myhash field4 1  # set only if the field does not exist
(integer) 1
127.0.0.1:6379[7]> HSETNX myhash field2 1 # the field already exists, so nothing is set
(integer) 0

Use case: hashes suit data that changes often, especially frequently changing information such as user profiles.

Hashes are better for storing objects; strings are better for plain string values (example below).
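For example, a user object can be kept one field per attribute, so a single attribute can be read or updated without rewriting the whole object (the key user:1 is illustrative):

127.0.0.1:6379> HMSET user:1 name zhangsan age 3
OK
127.0.0.1:6379> HGET user:1 name
"zhangsan"
127.0.0.1:6379> HINCRBY user:1 age 1 # bump one attribute only
(integer) 4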

Zset (sorted set)

On top of set, each member gains one extra value, the score: set k1 v1 ==> zset k1 score1 v1

ZADD

ZRANGE

127.0.0.1:6379[7]> zadd myset 1 one
(integer) 1
127.0.0.1:6379[7]> zadd myset 2 two
(integer) 1
127.0.0.1:6379[7]> zadd myset 3 three 4 four
(integer) 2
127.0.0.1:6379[7]> zrange myset 0 -1
1) "one"
2) "two"
3) "three"
4) "four"

ZRANGEBYSCORE key min max

127.0.0.1:6379[7]> zadd salary 2500 xiaohong  # add three members
(integer) 1
127.0.0.1:6379[7]> zadd salary 5000 zhangsan
(integer) 1
127.0.0.1:6379[7]> zadd salary 500 finlu
(integer) 1
127.0.0.1:6379[7]> ZRANGEBYSCORE salary -inf +inf # show all members, sorted ascending by score
1) "finlu"
2) "xiaohong"
3) "zhangsan"
127.0.0.1:6379[7]> ZREVRANGE salary 0 -1 # show all members, sorted descending
1) "zhangsan"
2) "xiaohong"
3) "finlu"
127.0.0.1:6379[7]> ZRANGEBYSCORE salary -inf +inf WITHSCORES # ascending, including the scores
1) "finlu"
2) "500"
3) "xiaohong"
4) "2500"
5) "zhangsan"
6) "5000"
127.0.0.1:6379[7]> ZRANGEBYSCORE salary -inf 2500 WITHSCORES # members with score <= 2500
1) "finlu"
2) "500"
3) "xiaohong"
4) "2500"

ZREM

127.0.0.1:6379[7]> zrange salary 0 -1
1) "finlu"
2) "xiaohong"
3) "zhangsan"
127.0.0.1:6379[7]> ZREM salary xiaohong # remove the given member from the sorted set
(integer) 1
127.0.0.1:6379[7]> zrange salary 0 -1
1) "finlu"
2) "zhangsan"
127.0.0.1:6379[7]> ZCARD salary # number of members in salary
(integer) 2
127.0.0.1:6379[7]> ZCOUNT salary 100 3000 # number of members within a score range
(integer) 2

Use-case ideas: sorted storage for ranked data, e.g. class grade tables or job priority tables.

Message notification with weights: ordinary messages score 1, important messages 2, and handling order follows the weight.

Leaderboards: take the top N (see the sketch below).
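A minimal top-N sketch using ZREVRANGE (the key board and the player names are illustrative):

127.0.0.1:6379> ZADD board 100 playerA 200 playerB 150 playerC
(integer) 3
127.0.0.1:6379> ZREVRANGE board 0 1 WITHSCORES # top 2 by score, highest first
1) "playerB"
2) "200"
3) "playerC"
4) "150"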

Three Special Redis Data Types

Geospatial

Friend locations, people nearby, ride-distance calculations, and the like.

Redis Geo shipped back in Redis 3.2. It can derive geographic information: the distance between two places, who is within a few miles, and so on.

There are only six commands:

GEOADD

Chinese docs: http://www.redis.cn/commands/geoadd.html

Coordinate lookup: http://www.jsons.cn/lngcode/

Detailed reference: https://www.redis.net.cn/order/3685.html

Rule: the two poles cannot be added directly. City data is generally downloaded and imported in bulk through a Java program (a sketch follows).
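A hedged sketch of such a bulk import with Jedis (the cities array is illustrative; real data would come from a downloaded dataset):

package cn.com.finlu;

import redis.clients.jedis.Jedis;

public class GeoImport {
    public static void main(String[] args) {
        // name, longitude, latitude
        Object[][] cities = {
                {"beijing", 116.40, 39.90},
                {"shanghai", 121.47, 31.23},
        };
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            for (Object[] city : cities) {
                // geoadd(key, longitude, latitude, member)
                jedis.geoadd("china:city", (Double) city[1], (Double) city[2], (String) city[0]);
            }
        }
    }
}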

Adding city data:

Arguments: key, then the values (longitude, latitude, member name); note that the official docs have an error here.

127.0.0.1:6379[7]> geoadd china:city 116.40 39.90 beijing
(integer) 1
127.0.0.1:6379[7]> geoadd china:city 121.47 31.23 shanghai
(integer) 1
127.0.0.1:6379[7]> geoadd china:city 106.50 29.53 chongqing 114.05 22.52 shengzheng
(integer) 2
127.0.0.1:6379[7]> geoadd china:city 120.16 30.24 hangzhou 108.96 34.26 xian
(integer) 2

GEOPOS

Fetch a member's current position; the reply is a coordinate pair.

127.0.0.1:6379[7]> GEOPOS china:city beijing
1) 1) "116.39999896287918091"
   2) "39.90000009167092543"
127.0.0.1:6379[7]> GEOPOS china:city chongqing
1) 1) "106.49999767541885376"
   2) "29.52999957900659211"

GEODIST

The distance between two positions.

Units:

  • m meters
  • km kilometers
  • mi miles
  • ft feet

127.0.0.1:6379[7]> GEODIST china:city beijing shanghai   # straight-line distance between Beijing and Shanghai (meters by default)
"1067378.7564"
127.0.0.1:6379[7]> GEODIST china:city beijing chongqing # straight-line distance between Beijing and Chongqing
"1464070.8051"
127.0.0.1:6379[7]> GEODIST china:city beijing chongqing km # in kilometers
"1464.0708"

Nearby people?

Collect everyone's location,

then query by radius.

GEORADIUS

Centered on a given longitude/latitude.

Can return a limited number of people.

All the demo data was loaded into china:city, which keeps the results readable.

127.0.0.1:6379[7]> GEORADIUS china:city 110 30 1000 km  # cities within 1000 km of the point (110, 30)
1) "chongqing"
2) "xian"
3) "shengzheng"
4) "hangzhou"
127.0.0.1:6379[7]> GEORADIUS china:city 110 30 500 km
1) "chongqing"
2) "xian"
127.0.0.1:6379[7]> GEORADIUS china:city 110 30 500 km withdist # include each result's distance from the center
1) 1) "chongqing"
   2) "341.9374"
2) 1) "xian"
   2) "483.8340"
127.0.0.1:6379[7]> GEORADIUS china:city 110 30 500 km withcoord # include each result's coordinates
1) 1) "chongqing"
   2) 1) "106.49999767541885376"
      2) "29.52999957900659211"
2) 1) "xian"
   2) 1) "108.96000176668167114"
      2) "34.25999964418929977"
127.0.0.1:6379[7]> GEORADIUS china:city 110 30 500 km withdist withcoord count 1 # cap the number of results
1) 1) "chongqing"
   2) "341.9374"
   3) 1) "106.49999767541885376"
      2) "29.52999957900659211"

GEORADIUSBYMEMBER

Find the other elements located around a given member.

127.0.0.1:6379[7]> GEORADIUSBYMEMBER china:city beijing 1000 km
1) "beijing"
2) "xian"
127.0.0.1:6379[7]> GEORADIUSBYMEMBER china:city shanghai 400 km
1) "hangzhou"
2) "shanghai"

GEOHASH

Returns the Geohash representation of one or more members.

The command returns an 11-character Geohash string.

127.0.0.1:6379[7]> GEOHASH china:city beijing chongqing  # convert 2-D coordinates to 1-D strings; the more alike two strings look, the closer the points
1) "wx4fbxxfke0"
2) "wm5xzrybty0"

Under the hood, GEO is implemented as a Zset, so Zset commands work on GEO keys.

127.0.0.1:6379[7]> ZRANGE china:city 0 -1
1) "chongqing"
2) "xian"
3) "shengzheng"
4) "hangzhou"
5) "shanghai"
6) "beijing"
127.0.0.1:6379[7]> ZREM china:city beijing
(integer) 1
127.0.0.1:6379[7]> ZRANGE china:city 0 -1
1) "chongqing"
2) "xian"
3) "shengzheng"
4) "hangzhou"
5) "shanghai"

HyperLogLog

What is cardinality?

A{1,3,5,7,8,9,7} B{1,3,5,7,8}

Cardinality = the number of distinct elements (6 for A above); some error is acceptable.

Overview

Redis added the HyperLogLog data structure in version 2.8.9.

Redis HyperLogLog is a cardinality-estimation algorithm.

Advantage: the memory footprint is fixed. Counting up to 2^64 distinct elements takes just 12 KB, so judged purely by memory, HyperLogLog is the first choice.

Web-page UV (unique visitors: one person visiting a site many times still counts as one).

The traditional approach: keep user ids in a set and use the set's element count as the metric.

Storing large numbers of user ids that way becomes a burden; the goal is counting, not keeping the ids.

The error rate is 0.81%, acceptable for UV statistics.

Trying it out:

127.0.0.1:6379[7]> PFADD mykey a b c d e f g h i j  # create the first group, mykey
(integer) 1
127.0.0.1:6379[7]> PFCOUNT mykey # estimate the cardinality of mykey
(integer) 10
127.0.0.1:6379[7]> PFADD mykey2 a b c d e f g h i j k m a # create the second group, mykey2
(integer) 1
127.0.0.1:6379[7]> PFCOUNT mykey
(integer) 10
127.0.0.1:6379[7]> PFCOUNT mykey2
(integer) 12
127.0.0.1:6379[7]> PFMERGE mykey3 mykey mykey2 # merge mykey and mykey2 => mykey3
OK
127.0.0.1:6379[7]> PFCOUNT mykey3 # cardinality of the union
(integer) 12

If some error is tolerable, HyperLogLog is a sure fit.

If no error is tolerable, use a set or a custom data structure instead.

Bitmap

Bit-level storage.

Counting two-state data, e.g. infected vs not infected: 0 1 0 1.

User statistics: active vs inactive, logged-in vs not, check-ins over 365 days.

Anything with exactly two states can use a bitmap. It records data as binary bits, with only the two states 0 and 1.

365 days = 365 bits; 1 byte = 8 bits, so roughly 46 bytes.

Using a bitmap to record a week of check-ins (matching the transcript below):

Mon: 1

Tue: 0

Wed: 0

Thu: 1

Fri: 1

Sat: 0

Sun: 0

127.0.0.1:6379[7]> SETBIT sign 0 1
(integer) 0
127.0.0.1:6379[7]> SETBIT sign 1 0
(integer) 0
127.0.0.1:6379[7]> SETBIT sign 2 0
(integer) 0
127.0.0.1:6379[7]> SETBIT sign 3 1
(integer) 0
127.0.0.1:6379[7]> SETBIT sign 4 1
(integer) 0
127.0.0.1:6379[7]> SETBIT sign 5 0
(integer) 0
127.0.0.1:6379[7]> SETBIT sign 6 0
(integer) 0

Check whether a particular day was checked in:

127.0.0.1:6379[7]> GETBIT sign 3
(integer) 1
127.0.0.1:6379[7]> GETBIT sign 6
(integer) 0

Count the days checked in:

127.0.0.1:6379[7]> BITCOUNT sign
(integer) 3

Redis Transactions

MySQL: ACID.

Everything succeeds together or fails together: atomicity.

A single Redis command is atomic, but a Redis transaction is not.

The essence of a Redis transaction: a batch of commands. All commands in a transaction are serialized, and during execution they run in order.

---- queue: set set set ---- exec ----

Redis transactions have no concept of isolation levels.

Commands in a transaction are not executed when entered; they only run once EXEC is issued.

A Redis transaction:

  • Open the transaction (MULTI)
  • Queue the commands
  • Execute the transaction (EXEC)
127.0.0.1:6379[7]> MULTI   # open a transaction
OK
# commands are queued
127.0.0.1:6379[7]> set k1 v1
QUEUED
127.0.0.1:6379[7]> set k2 v2
QUEUED
127.0.0.1:6379[7]> get k2
QUEUED
127.0.0.1:6379[7]> set k3 v3
QUEUED
127.0.0.1:6379[7]> EXEC # execute the transaction
1) OK
2) OK
3) "v2"
4) OK

Abandoning a transaction: DISCARD

127.0.0.1:6379[7]> MULTI   # open a transaction
OK
127.0.0.1:6379[7]> set k1 v1
QUEUED
127.0.0.1:6379[7]> set k2 v2
QUEUED
127.0.0.1:6379[7]> set k4 v4
QUEUED
127.0.0.1:6379[7]> DISCARD # abandon the transaction
OK
127.0.0.1:6379[7]> GET k4 # none of the queued commands ran
(nil)

Compile-style errors (a malformed command): none of the commands in the transaction will execute.

127.0.0.1:6379[7]> MULTI
OK
127.0.0.1:6379[7]> set k1 v1
QUEUED
127.0.0.1:6379[7]> set k2 v2
QUEUED
127.0.0.1:6379[7]> set k3 v3
QUEUED
127.0.0.1:6379[7]> getset k3
(error) ERR wrong number of arguments for 'getset' command
127.0.0.1:6379[7]> set k4 v4
QUEUED
127.0.0.1:6379[7]> EXEC
(error) EXECABORT Transaction discarded because of previous errors.

Runtime errors (like 1/0): if a queued command only fails at execution time, the other commands still execute normally, and the failing command reports an error.

127.0.0.1:6379[7]> set k1 v1
OK
127.0.0.1:6379[7]> MULTI
OK
127.0.0.1:6379[7]> INCR k1 # will throw at execution time, but it still enters the queue
QUEUED
127.0.0.1:6379[7]> SET k2 v2
QUEUED
127.0.0.1:6379[7]> SET k3 v3
QUEUED
127.0.0.1:6379[7]> EXEC
1) (error) ERR value is not an integer or out of range
2) OK
3) OK

Monitoring: WATCH

Pessimistic locking: assume something can go wrong at any moment, so lock around every operation.

Optimistic locking: assume nothing will go wrong, so never lock; when updating, check whether anyone modified the data in the meantime, e.g. with a version field.

In MySQL:

  1. Read the version
  2. Compare the version when updating

Redis can implement optimistic locking with watch.

Normal, successful execution:

127.0.0.1:6379[7]> set money 100
OK
127.0.0.1:6379[7]> set out 0
OK
127.0.0.1:6379[7]> watch money
OK
127.0.0.1:6379[7]> MULTI
OK
127.0.0.1:6379[7]> DECRBY money 20
QUEUED
127.0.0.1:6379[7]> INCRBY out 20
QUEUED
127.0.0.1:6379[7]> EXEC # money did not change during the transaction, so it succeeds
1) (integer) 80
2) (integer) 20

Simulating a failure: thread 1 opens a transaction that updates a watched value but has not yet committed; thread 2 then updates the data; when thread 1 finally executes the transaction, it fails.

Thread 1:

127.0.0.1:6379[7]> set money 100
OK
127.0.0.1:6379[7]> set out 0
OK
127.0.0.1:6379[7]> WATCH monet  # typo: this watches the wrong key, so start over
OK
127.0.0.1:6379[7]> flushdb
OK
127.0.0.1:6379[7]> set money 100
OK
127.0.0.1:6379[7]> set out 0
OK
127.0.0.1:6379[7]> WATCH money
OK
127.0.0.1:6379[7]> MULTI
OK
127.0.0.1:6379[7]> DECRBY money 20
QUEUED
127.0.0.1:6379[7]> INCRBY out 20
QUEUED
127.0.0.1:6379[7]> EXEC # the watched value changed mid-transaction, so the transaction fails
(nil)

Thread 2:

127.0.0.1:6379[7]> get money
"100"
127.0.0.1:6379[7]> set money 200
OK

If the transaction fails, just fetch the latest value and try again (a sketch follows).
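A minimal retry loop in Jedis along those lines (a sketch using the same money/out keys, not the video's code):

import redis.clients.jedis.Jedis;
import redis.clients.jedis.Transaction;

import java.util.List;

public class WatchRetry {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            while (true) {
                jedis.watch("money");            // optimistic lock on money
                Transaction tx = jedis.multi();
                tx.decrBy("money", 20);
                tx.incrBy("out", 20);
                List<Object> result = tx.exec(); // null if money changed in the meantime
                if (result != null) {
                    break;                       // success
                }
                // failed: loop around, which re-reads the latest state and retries
            }
        }
    }
}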

Redis's Java API (Jedis)

Operating Redis from Java.

What is Jedis?

Jedis is the Java client officially recommended by Redis. To operate Redis from Java, know Jedis well.

Testing:

  1. Add the dependencies:
<dependencies>
    <dependency>
        <groupId>redis.clients</groupId>
        <artifactId>jedis</artifactId>
        <version>3.3.0</version>
    </dependency>
    <dependency>
        <groupId>com.alibaba</groupId>
        <artifactId>fastjson</artifactId>
        <version>1.2.57</version>
    </dependency>
</dependencies>
  2. Write the test code

    1. Connect to the database:
package cn.com.finlu;

import redis.clients.jedis.Jedis;

public class TestPing {
    public static void main(String[] args) {
        // 1. create a Jedis object
        Jedis jedis = new Jedis("127.0.0.1", 6379);
        // the Jedis API mirrors the Redis commands
        System.out.println(jedis.ping());
    }
}

Output: PONG

Testing the common key APIs

package cn.com.finlu;

import redis.clients.jedis.Jedis;

import java.util.Set;

public class TestKey {
    private static final String HOST = "127.0.0.1";
    private static final int PORT = 6379;
    private static Jedis jedis = new Jedis(HOST, PORT);

    public static void main(String[] args) {
        System.out.println("Flush the DB: " + jedis.flushDB());
        System.out.println("Does key 'username' exist: " + jedis.exists("username"));
        System.out.println("Set <'username', 'finlu'>: " + jedis.set("username", "finlu"));
        System.out.println("Set <'password', 'password'>: " + jedis.set("password", "password"));
        System.out.println("All keys in the system:");
        Set<String> keys = jedis.keys("*");
        System.out.println(keys);
        System.out.println("Delete key 'password': " + jedis.del("password"));
        System.out.println("Does key 'password' exist: " + jedis.exists("password"));
        System.out.println("Type of the value at 'username': " + jedis.type("username"));
        System.out.println("A random key from the keyspace: " + jedis.randomKey());
        System.out.println("Rename the key: " + jedis.rename("username", "name"));
        System.out.println("Fetch the renamed 'name': " + jedis.get("name"));
        System.out.println("Select a DB by index: " + jedis.select(0));
        System.out.println("Delete all keys in the current DB: " + jedis.flushDB());
        System.out.println("Number of keys in the current DB: " + jedis.dbSize());
        System.out.println("Delete all keys in every DB: " + jedis.flushAll());
    }
}

Testing transactions with Jedis, and adding an optimistic lock via WATCH

package cn.com.finlu;

import redis.clients.jedis.Jedis;
import redis.clients.jedis.Transaction;

import java.util.List;

public class TestTX {
    private static final String HOST = "127.0.0.1";
    private static final int PORT = 6379;
    private static Jedis jedis = new Jedis(HOST, PORT);   // client 1
    private static Jedis jedis2 = new Jedis(HOST, PORT);  // client 2

    private static void testSuccessTx() {
        Transaction transaction = jedis.multi();
        transaction.set("username", "finlu");
        try {
            transaction.exec();
        } catch (Exception e) {
            transaction.discard(); // roll the transaction back
            e.printStackTrace();
        } finally {
            System.out.println("username is " + jedis.get("username"));
        }
    }

    public static void testFailTx() {
        Transaction transaction = jedis.multi();
        transaction.set("username", "finlu");
        try {
            int i = 1 / 0;
            transaction.exec();
        } catch (Exception e) {
            transaction.discard(); // roll the transaction back
            e.printStackTrace();
        } finally {
            System.out.println("username is " + jedis.get("username"));
        }
    }

    public static void testWatchThread1() {
        jedis.set("username", "finlu");
        jedis.watch("username");
        Transaction transaction = jedis.multi();
        try {
            Thread.sleep(3000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        try {
            transaction.set("test_tx", "v");
            transaction.get("username");
            List<Object> exec = transaction.exec();
            // With an empty queue, exec returns an empty ArrayList; on success it returns
            // the replies, matching what redis-cli would print; if the transaction was
            // aborted (the watched key changed), it returns null.
            if (exec == null) {
                System.out.println("The watched value was modified, so the transaction was rolled back");
            }
        } catch (Exception e) {
            transaction.discard(); // roll the transaction back
            e.printStackTrace();
        } finally {
            System.out.println("username is " + jedis.get("username"));
            System.out.println("test_tx is " + jedis.get("test_tx"));
        }
    }

    public static void testWatchThread2() {
        // jedis2.set("username", "finlu_edited"); // simulate a write from another client mid-transaction
    }

    public static void main(String[] args) {
        jedis.flushDB();
        testSuccessTx();
        jedis.flushDB();
        testFailTx();
        // client-1 thread
        new Thread(new Runnable() {
            @Override
            public void run() {
                testWatchThread1();
            }
        }).start();
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        // client-2 thread
        new Thread(new Runnable() {
            @Override
            public void run() {
                testWatchThread2();
            }
        }).start();
        // jedis.close(); // close the connection
    }
}

Integrating Redis with Spring Boot

Since Spring Boot 2.x, the underlying Jedis client has been replaced by Lettuce.

jedis: direct connections; sharing an instance across threads is unsafe, so a JedisPool connection pool is needed. More of a BIO style.

lettuce: built on Netty; instances can be shared across threads, so there are no thread-safety issues and fewer threads are needed. More of an NIO style.

@Bean
@ConditionalOnMissingBean(
    name = {"redisTemplate"}
)
public RedisTemplate<Object, Object> redisTemplate(RedisConnectionFactory redisConnectionFactory) throws UnknownHostException {
    // The default RedisTemplate does little configuration; all Redis objects need serialization.
    // Both generics are <Object, Object>; in practice you want <String, Object>, which requires casts.
    RedisTemplate<Object, Object> template = new RedisTemplate();
    template.setConnectionFactory(redisConnectionFactory);
    return template;
}

@Bean
@ConditionalOnMissingBean
// Strings are the most common Redis type, so a separate StringRedisTemplate bean is provided
public StringRedisTemplate stringRedisTemplate(RedisConnectionFactory redisConnectionFactory) throws UnknownHostException {
    StringRedisTemplate template = new StringRedisTemplate();
    template.setConnectionFactory(redisConnectionFactory);
    return template;
}

Integration test

  1. Add the dependency:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
  2. Configure the connection:

# Every Spring Boot configuration area has an auto-configuration class: RedisAutoConfiguration
# Each auto-configuration class binds a properties class: RedisProperties

# Redis connection settings
spring.redis.host=127.0.0.1
spring.redis.port=6379
  3. Test (a sketch follows):
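A minimal test sketch using the injected RedisTemplate (the class and key names are illustrative):

package cn.com.finlu.springredisdemo;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.data.redis.core.RedisTemplate;

@SpringBootTest
class SpringRedisDemoTests {

    @Autowired
    private RedisTemplate redisTemplate;

    @Test
    void contextLoads() {
        // opsForValue() works on strings, like set/get in redis-cli
        redisTemplate.opsForValue().set("mykey", "finlu");
        System.out.println(redisTemplate.opsForValue().get("mykey"));
    }
}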

RedisTemplate serialization:

The default is JDK serialization; JSON serialization is often used instead.

About storing objects: objects must be serialized.

POJOs used this way are generally serialized.

You can configure the serializers in your own Redis config class by writing your own RedisTemplate:

package cn.com.finlu.springredisdemo;

import com.fasterxml.jackson.annotation.JsonAutoDetect;
import com.fasterxml.jackson.annotation.PropertyAccessor;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.Jackson2JsonRedisSerializer;
import org.springframework.data.redis.serializer.StringRedisSerializer;

import java.net.UnknownHostException;

@Configuration
public class RedisConfig {

    // define our own redisTemplate
    @Bean
    public RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory redisConnectionFactory) throws UnknownHostException {
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(redisConnectionFactory);

        // JSON serialization via Jackson
        Jackson2JsonRedisSerializer jackson2JsonRedisSerializer = new Jackson2JsonRedisSerializer(Object.class);
        ObjectMapper om = new ObjectMapper();
        om.setVisibility(PropertyAccessor.ALL, JsonAutoDetect.Visibility.ANY);
        jackson2JsonRedisSerializer.setObjectMapper(om);

        // String serialization
        StringRedisSerializer stringRedisSerializer = new StringRedisSerializer();

        // keys use String serialization
        template.setKeySerializer(stringRedisSerializer);
        // hash keys also use String serialization
        template.setHashKeySerializer(stringRedisSerializer);
        // values use Jackson
        template.setValueSerializer(jackson2JsonRedisSerializer);
        // hash values use Jackson
        template.setHashValueSerializer(jackson2JsonRedisSerializer);

        return template;
    }
}

The previously garbled values stored in Redis now display correctly.

Common operations can be wrapped into a RedisUtils class (a sketch follows).
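A minimal RedisUtils sketch (illustrative; real utility classes wrap many more commands):

package cn.com.finlu.springredisdemo;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.stereotype.Component;

import java.util.concurrent.TimeUnit;

@Component
public class RedisUtils {

    @Autowired
    private RedisTemplate<String, Object> redisTemplate;

    // set a value with an expiry in seconds
    public boolean set(String key, Object value, long seconds) {
        try {
            redisTemplate.opsForValue().set(key, value, seconds, TimeUnit.SECONDS);
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    public Object get(String key) {
        return key == null ? null : redisTemplate.opsForValue().get(key);
    }

    public boolean hasKey(String key) {
        return Boolean.TRUE.equals(redisTemplate.hasKey(key));
    }
}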

The Redis Config File in Detail

Analyzing the config file

# Redis startup command:
# ./redis-server /path/to/redis.conf

# Memory sizes are specified in bytes; the units are case-insensitive:
# 1k => 1000 bytes
# 1kb => 1024 bytes
# 1m => 1000000 bytes
# 1mb => 1024*1024 bytes
# 1g => 1000000000 bytes
# 1gb => 1024*1024*1024 bytes

################################## INCLUDES ###################################

# Notice option "include" won't be rewritten by command "CONFIG REWRITE"
# from admin or Redis Sentinel. Since Redis always uses the last processed
# line as value of a configuration directive, you'd better put includes
# at the beginning of this file to avoid overwriting config change at runtime.
#
# If instead you are interested in using includes to override configuration
# options, it is better to use include as the last line.
# other configuration files can be included:
# include /path/to/local.conf
# include /path/to/other.conf

################################## MODULES #####################################

# Load modules at startup. If the server is not able to load modules
# it will abort. It is possible to use multiple loadmodule directives.
#
# loadmodule /path/to/my_module.so
# loadmodule /path/to/other_module.so

################################## NETWORK #####################################

# By default, if no "bind" configuration directive is specified, Redis listens
# for connections from all the network interfaces available on the server.
# It is possible to listen to just one or multiple selected interfaces using
# the "bind" configuration directive, followed by one or more IP addresses.
#
# Examples:
#
# bind 192.168.1.100 10.0.0.1
# bind 127.0.0.1 ::1
#
# ~~~ WARNING ~~~ If the computer running Redis is directly exposed to the
# internet, binding to all the interfaces is dangerous and will expose the
# instance to everybody on the internet. So by default we uncomment the
# following bind directive, that will force Redis to listen only into
# the IPv4 loopback interface address (this means Redis will be able to
# accept connections only from clients running into the same computer it
# is running).
#
# IF YOU ARE SURE YOU WANT YOUR INSTANCE TO LISTEN TO ALL THE INTERFACES
# JUST COMMENT THE FOLLOWING LINE.
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
bind 127.0.0.1

# Protected mode is a layer of security protection, in order to avoid that
# Redis instances left open on the internet are accessed and exploited.
#
# When protected mode is on and if:
#
# 1) The server is not binding explicitly to a set of addresses using the
# "bind" directive.
# 2) No password is configured.
#
# The server only accepts connections from clients connecting from the
# IPv4 and IPv6 loopback addresses 127.0.0.1 and ::1, and from Unix domain
# sockets.
#
# By default protected mode is enabled. You should disable it only if
# you are sure you want clients from other hosts to connect to Redis
# even if no authentication is configured, nor a specific set of interfaces
# are explicitly listed using the "bind" directive.
protected-mode yes

# Accept connections on the specified port, default is 6379 (IANA #815344).
# If port 0 is specified Redis will not listen on a TCP socket.
port 6379

# TCP listen() backlog.
#
# In high requests-per-second environments you need an high backlog in order
# to avoid slow clients connections issues. Note that the Linux kernel
# will silently truncate it to the value of /proc/sys/net/core/somaxconn so
# make sure to raise both the value of somaxconn and tcp_max_syn_backlog
# in order to get the desired effect.
tcp-backlog 511

# Unix socket.
#
# Specify the path for the Unix socket that will be used to listen for
# incoming connections. There is no default, so Redis will not listen
# on a unix socket when not specified.
#
# unixsocket /tmp/redis.sock
# unixsocketperm 700

# Close the connection after a client is idle for N seconds (0 to disable)
timeout 0

# TCP keepalive.
#
# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence
# of communication. This is useful for two reasons:
#
# 1) Detect dead peers.
# 2) Take the connection alive from the point of view of network
# equipment in the middle.
#
# On Linux, the specified value (in seconds) is the period used to send ACKs.
# Note that to close the connection the double of the time is needed.
# On other kernels the period depends on the kernel configuration.
#
# A reasonable value for this option is 300 seconds, which is the new
# Redis default starting with Redis 3.2.1.
tcp-keepalive 300

################################# GENERAL #####################################

# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
daemonize no

# If you run Redis from upstart or systemd, Redis can interact with your
# supervision tree. Options:
# supervised no - no supervision interaction
# supervised upstart - signal upstart by putting Redis into SIGSTOP mode
# supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET
# supervised auto - detect upstart or systemd method based on
# UPSTART_JOB or NOTIFY_SOCKET environment variables
# Note: these supervision methods only signal "process is ready."
# They do not enable continuous liveness pings back to your supervisor.
supervised no

# If a pid file is specified, Redis writes it where specified at startup
# and removes it at exit.
#
# When the server runs non daemonized, no pid file is created if none is
# specified in the configuration. When the server is daemonized, the pid file
# is used even if not specified, defaulting to "/var/run/redis.pid".
#
# Creating a pid file is best effort: if Redis is not able to create it
# nothing bad happens, the server will start and run normally.
pidfile /var/run/redis_6379.pid

# Specify the server verbosity level.
# This can be one of:
# debug (a lot of information, useful for development/testing)
# verbose (many rarely useful info, but not a mess like the debug level)
# notice (moderately verbose, what you want in production probably)
# warning (only very important / critical messages are logged)
loglevel notice

# Specify the log file name. Also the empty string can be used to force
# Redis to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
logfile ""

# To enable logging to the system logger, just set 'syslog-enabled' to yes,
# and optionally update the other syslog parameters to suit your needs.
# syslog-enabled no

# Specify the syslog identity.
# syslog-ident redis

# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.
# syslog-facility local0

# Set the number of databases. The default database is DB 0, you can select
# a different one on a per-connection basis using SELECT <dbid> where
# dbid is a number between 0 and 'databases'-1
databases 16

# By default Redis shows an ASCII art logo only when started to log to the
# standard output and if the standard output is a TTY. Basically this means
# that normally a logo is displayed only in interactive sessions.
#
# However it is possible to force the pre-4.0 behavior and always show a
# ASCII art logo in startup logs by setting the following option to yes.
always-show-logo yes

################################ SNAPSHOTTING ################################
#
# Save the DB on disk:
#
# save <seconds> <changes>
#
# Will save the DB if both the given number of seconds and the given
# number of write operations against the DB occurred.
#
# In the example below the behaviour will be to save:
# after 900 sec (15 min) if at least 1 key changed
# after 300 sec (5 min) if at least 10 keys changed
# after 60 sec if at least 10000 keys changed
#
# Note: you can disable saving completely by commenting out all "save" lines.
#
# It is also possible to remove all the previously configured save
# points by adding a save directive with a single empty string argument
# like in the following example:
#
# save ""

save 900 1
save 300 10
save 60 10000

# By default Redis will stop accepting writes if RDB snapshots are enabled
# (at least one save point) and the latest background save failed.
# This will make the user aware (in a hard way) that data is not persisting
# on disk properly, otherwise chances are that no one will notice and some
# disaster will happen.
#
# If the background saving process will start working again Redis will
# automatically allow writes again.
#
# However if you have setup your proper monitoring of the Redis server
# and persistence, you may want to disable this feature so that Redis will
# continue to work as usual even if there are problems with disk,
# permissions, and so forth.
stop-writes-on-bgsave-error yes

# Compress string objects using LZF when dump .rdb databases?
# For default that's set to 'yes' as it's almost always a win.
# If you want to save some CPU in the saving child set it to 'no' but
# the dataset will likely be bigger if you have compressible values or keys.
rdbcompression yes

# Since version 5 of RDB a CRC64 checksum is placed at the end of the file.
# This makes the format more resistant to corruption but there is a performance
# hit to pay (around 10%) when saving and loading RDB files, so you can disable it
# for maximum performances.
#
# RDB files created with checksum disabled have a checksum of zero that will
# tell the loading code to skip the check.
rdbchecksum yes

# The filename where to dump the DB
dbfilename dump.rdb

# The working directory.
#
# The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
#
# The Append Only File will also be created inside this directory.
#
# Note that you must specify a directory here, not a file name.
dir ./

################################# REPLICATION #################################

# Master-Replica replication. Use replicaof to make a Redis instance a copy of
# another Redis server. A few things to understand ASAP about Redis replication.
#
# +------------------+ +---------------+
# | Master | ---> | Replica |
# | (receive writes) | | (exact copy) |
# +------------------+ +---------------+
#
# 1) Redis replication is asynchronous, but you can configure a master to
# stop accepting writes if it appears to be not connected with at least
# a given number of replicas.
# 2) Redis replicas are able to perform a partial resynchronization with the
# master if the replication link is lost for a relatively small amount of
# time. You may want to configure the replication backlog size (see the next
# sections of this file) with a sensible value depending on your needs.
# 3) Replication is automatic and does not need user intervention. After a
# network partition replicas automatically try to reconnect to masters
# and resynchronize with them.
#
# replicaof <masterip> <masterport>

# If the master is password protected (using the "requirepass" configuration
# directive below) it is possible to tell the replica to authenticate before
# starting the replication synchronization process, otherwise the master will
# refuse the replica request.
#
# masterauth <master-password>

# When a replica loses its connection with the master, or when the replication
# is still in progress, the replica can act in two different ways:
#
# 1) if replica-serve-stale-data is set to 'yes' (the default) the replica will
# still reply to client requests, possibly with out of date data, or the
# data set may just be empty if this is the first synchronization.
#
# 2) if replica-serve-stale-data is set to 'no' the replica will reply with
# an error "SYNC with master in progress" to all the kind of commands
# but to INFO, replicaOF, AUTH, PING, SHUTDOWN, REPLCONF, ROLE, CONFIG,
# SUBSCRIBE, UNSUBSCRIBE, PSUBSCRIBE, PUNSUBSCRIBE, PUBLISH, PUBSUB,
# COMMAND, POST, HOST: and LATENCY.
#
replica-serve-stale-data yes

# You can configure a replica instance to accept writes or not. Writing against
# a replica instance may be useful to store some ephemeral data (because data
# written on a replica will be easily deleted after resync with the master) but
# may also cause problems if clients are writing to it because of a
# misconfiguration.
#
# Since Redis 2.6 by default replicas are read-only.
#
# Note: read only replicas are not designed to be exposed to untrusted clients
# on the internet. It's just a protection layer against misuse of the instance.
# Still a read only replica exports by default all the administrative commands
# such as CONFIG, DEBUG, and so forth. To a limited extent you can improve
# security of read only replicas using 'rename-command' to shadow all the
# administrative / dangerous commands.
replica-read-only yes

# Replication SYNC strategy: disk or socket.
#
# -------------------------------------------------------
# WARNING: DISKLESS REPLICATION IS EXPERIMENTAL CURRENTLY
# -------------------------------------------------------
#
# New replicas and reconnecting replicas that are not able to continue the replication
# process just receiving differences, need to do what is called a "full
# synchronization". An RDB file is transmitted from the master to the replicas.
# The transmission can happen in two different ways:
#
# 1) Disk-backed: The Redis master creates a new process that writes the RDB
# file on disk. Later the file is transferred by the parent
# process to the replicas incrementally.
# 2) Diskless: The Redis master creates a new process that directly writes the
# RDB file to replica sockets, without touching the disk at all.
#
# With disk-backed replication, while the RDB file is generated, more replicas
# can be queued and served with the RDB file as soon as the current child producing
# the RDB file finishes its work. With diskless replication instead once
# the transfer starts, new replicas arriving will be queued and a new transfer
# will start when the current one terminates.
#
# When diskless replication is used, the master waits a configurable amount of
# time (in seconds) before starting the transfer in the hope that multiple replicas
# will arrive and the transfer can be parallelized.
#
# With slow disks and fast (large bandwidth) networks, diskless replication
# works better.
repl-diskless-sync no

# When diskless replication is enabled, it is possible to configure the delay
# the server waits in order to spawn the child that transfers the RDB via socket
# to the replicas.
#
# This is important since once the transfer starts, it is not possible to serve
# new replicas arriving, that will be queued for the next RDB transfer, so the server
# waits a delay in order to let more replicas arrive.
#
# The delay is specified in seconds, and by default is 5 seconds. To disable
# it entirely just set it to 0 seconds and the transfer will start ASAP.
repl-diskless-sync-delay 5

# Replicas send PINGs to server in a predefined interval. It's possible to change
# this interval with the repl_ping_replica_period option. The default value is 10
# seconds.
#
# repl-ping-replica-period 10

# The following option sets the replication timeout for:
#
# 1) Bulk transfer I/O during SYNC, from the point of view of replica.
# 2) Master timeout from the point of view of replicas (data, pings).
# 3) Replica timeout from the point of view of masters (REPLCONF ACK pings).
#
# It is important to make sure that this value is greater than the value
# specified for repl-ping-replica-period otherwise a timeout will be detected
# every time there is low traffic between the master and the replica.
#
# repl-timeout 60

# Disable TCP_NODELAY on the replica socket after SYNC?
#
# If you select "yes" Redis will use a smaller number of TCP packets and
# less bandwidth to send data to replicas. But this can add a delay for
# the data to appear on the replica side, up to 40 milliseconds with
# Linux kernels using a default configuration.
#
# If you select "no" the delay for data to appear on the replica side will
# be reduced but more bandwidth will be used for replication.
#
# By default we optimize for low latency, but in very high traffic conditions
# or when the master and replicas are many hops away, turning this to "yes" may
# be a good idea.
repl-disable-tcp-nodelay no

# Set the replication backlog size. The backlog is a buffer that accumulates
# replica data when replicas are disconnected for some time, so that when a replica
# wants to reconnect again, often a full resync is not needed, but a partial
# resync is enough, just passing the portion of data the replica missed while
# disconnected.
#
# The bigger the replication backlog, the longer the time the replica can be
# disconnected and later be able to perform a partial resynchronization.
#
# The backlog is only allocated once there is at least a replica connected.
#
# repl-backlog-size 1mb

# After a master has no longer connected replicas for some time, the backlog
# will be freed. The following option configures the amount of seconds that
# need to elapse, starting from the time the last replica disconnected, for
# the backlog buffer to be freed.
#
# Note that replicas never free the backlog for timeout, since they may be
# promoted to masters later, and should be able to correctly "partially
# resynchronize" with the replicas: hence they should always accumulate backlog.
#
# A value of 0 means to never release the backlog.
#
# repl-backlog-ttl 3600

# The replica priority is an integer number published by Redis in the INFO output.
# It is used by Redis Sentinel in order to select a replica to promote into a
# master if the master is no longer working correctly.
#
# A replica with a low priority number is considered better for promotion, so
# for instance if there are three replicas with priority 10, 100, 25 Sentinel will
# pick the one with priority 10, that is the lowest.
#
# However a special priority of 0 marks the replica as not able to perform the
# role of master, so a replica with priority of 0 will never be selected by
# Redis Sentinel for promotion.
#
# By default the priority is 100.
replica-priority 100

# It is possible for a master to stop accepting writes if there are less than
# N replicas connected, having a lag less or equal than M seconds.
#
# The N replicas need to be in "online" state.
#
# The lag in seconds, that must be <= the specified value, is calculated from
# the last ping received from the replica, that is usually sent every second.
#
# This option does not GUARANTEE that N replicas will accept the write, but
# will limit the window of exposure for lost writes in case not enough replicas
# are available, to the specified number of seconds.
#
# For example to require at least 3 replicas with a lag <= 10 seconds use:
#
# min-replicas-to-write 3
# min-replicas-max-lag 10
#
# Setting one or the other to 0 disables the feature.
#
# By default min-replicas-to-write is set to 0 (feature disabled) and
# min-replicas-max-lag is set to 10.

# A Redis master is able to list the address and port of the attached
# replicas in different ways. For example the "INFO replication" section
# offers this information, which is used, among other tools, by
# Redis Sentinel in order to discover replica instances.
# Another place where this info is available is in the output of the
# "ROLE" command of a master.
#
# The listed IP and address normally reported by a replica is obtained
# in the following way:
#
# IP: The address is auto detected by checking the peer address
# of the socket used by the replica to connect with the master.
#
# Port: The port is communicated by the replica during the replication
# handshake, and is normally the port that the replica is using to
# listen for connections.
#
# However when port forwarding or Network Address Translation (NAT) is
# used, the replica may be actually reachable via different IP and port
# pairs. The following two options can be used by a replica in order to
# report to its master a specific set of IP and port, so that both INFO
# and ROLE will report those values.
#
# There is no need to use both the options if you need to override just
# the port or the IP address.
#
# replica-announce-ip 5.5.5.5
# replica-announce-port 1234

################################## SECURITY ###################################

# Require clients to issue AUTH <PASSWORD> before processing any other
# commands. This might be useful in environments in which you do not trust
# others with access to the host running redis-server.
#
# This should stay commented out for backward compatibility and because most
# people do not need auth (e.g. they run their own servers).
#
# Warning: since Redis is pretty fast an outside user can try up to
# 150k passwords per second against a good box. This means that you should
# use a very strong password otherwise it will be very easy to break.
#
# requirepass foobared

# Command renaming.
#
# It is possible to change the name of dangerous commands in a shared
# environment. For instance the CONFIG command may be renamed into something
# hard to guess so that it will still be available for internal-use tools
# but not available for general clients.
#
# Example:
#
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
#
# It is also possible to completely kill a command by renaming it into
# an empty string:
#
# rename-command CONFIG ""
#
# Please note that changing the name of commands that are logged into the
# AOF file or transmitted to replicas may cause problems.

################################### CLIENTS ####################################

# Set the max number of connected clients at the same time. By default
# this limit is set to 10000 clients, however if the Redis server is not
# able to configure the process file limit to allow for the specified limit
# the max number of allowed clients is set to the current file limit
# minus 32 (as Redis reserves a few file descriptors for internal uses).
#
# Once the limit is reached Redis will close all the new connections sending
# an error 'max number of clients reached'.
#
# maxclients 10000

############################## MEMORY MANAGEMENT ################################

# Set a memory usage limit to the specified amount of bytes.
# When the memory limit is reached Redis will try to remove keys
# according to the eviction policy selected (see maxmemory-policy).
#
# If Redis can't remove keys according to the policy, or if the policy is
# set to 'noeviction', Redis will start to reply with errors to commands
# that would use more memory, like SET, LPUSH, and so on, and will continue
# to reply to read-only commands like GET.
#
# This option is usually useful when using Redis as an LRU or LFU cache, or to
# set a hard memory limit for an instance (using the 'noeviction' policy).
#
# WARNING: If you have replicas attached to an instance with maxmemory on,
# the size of the output buffers needed to feed the replicas are subtracted
# from the used memory count, so that network problems / resyncs will
# not trigger a loop where keys are evicted, and in turn the output
# buffer of replicas is full with DELs of keys evicted triggering the deletion
# of more keys, and so forth until the database is completely emptied.
#
# In short... if you have replicas attached it is suggested that you set a lower
# limit for maxmemory so that there is some free RAM on the system for replica
# output buffers (but this is not needed if the policy is 'noeviction').
#
# maxmemory <bytes>

# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached. You can select among five behaviors:
#
# volatile-lru -> Evict using approximated LRU among the keys with an expire set.
# allkeys-lru -> Evict any key using approximated LRU.
# volatile-lfu -> Evict using approximated LFU among the keys with an expire set.
# allkeys-lfu -> Evict any key using approximated LFU.
# volatile-random -> Remove a random key among the ones with an expire set.
# allkeys-random -> Remove a random key, any key.
# volatile-ttl -> Remove the key with the nearest expire time (minor TTL)
# noeviction -> Don't evict anything, just return an error on write operations.
#
# LRU means Least Recently Used
# LFU means Least Frequently Used
#
# Both LRU, LFU and volatile-ttl are implemented using approximated
# randomized algorithms.
#
# Note: with any of the above policies, Redis will return an error on write
# operations, when there are no suitable keys for eviction.
#
# At the date of writing these commands are: set setnx setex append
# incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
# sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
# zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
# getset mset msetnx exec sort
#
# The default is:
#
# maxmemory-policy noeviction

# LRU, LFU and minimal TTL algorithms are not precise algorithms but approximated
# algorithms (in order to save memory), so you can tune it for speed or
# accuracy. For default Redis will check five keys and pick the one that was
# used less recently, you can change the sample size using the following
# configuration directive.
#
# The default of 5 produces good enough results. 10 Approximates very closely
# true LRU but costs more CPU. 3 is faster but not very accurate.
#
# maxmemory-samples 5

# Starting from Redis 5, by default a replica will ignore its maxmemory setting
# (unless it is promoted to master after a failover or manually). It means
# that the eviction of keys will be just handled by the master, sending the
# DEL commands to the replica as keys evict in the master side.
#
# This behavior ensures that masters and replicas stay consistent, and is usually
# what you want, however if your replica is writable, or you want the replica to have
# a different memory setting, and you are sure all the writes performed to the
# replica are idempotent, then you may change this default (but be sure to understand
# what you are doing).
#
# Note that since the replica by default does not evict, it may end using more
# memory than the one set via maxmemory (there are certain buffers that may
# be larger on the replica, or data structures may sometimes take more memory and so
# forth). So make sure you monitor your replicas and make sure they have enough
# memory to never hit a real out-of-memory condition before the master hits
# the configured maxmemory setting.
#
# replica-ignore-maxmemory yes

############################# LAZY FREEING ####################################

# Redis has two primitives to delete keys. One is called DEL and is a blocking
# deletion of the object. It means that the server stops processing new commands
# in order to reclaim all the memory associated with an object in a synchronous
# way. If the key deleted is associated with a small object, the time needed
# in order to execute the DEL command is very small and comparable to most other
# O(1) or O(log_N) commands in Redis. However if the key is associated with an
# aggregated value containing millions of elements, the server can block for
# a long time (even seconds) in order to complete the operation.
#
# For the above reasons Redis also offers non blocking deletion primitives
# such as UNLINK (non blocking DEL) and the ASYNC option of FLUSHALL and
# FLUSHDB commands, in order to reclaim memory in background. Those commands
# are executed in constant time. Another thread will incrementally free the
# object in the background as fast as possible.
#
# DEL, UNLINK and ASYNC option of FLUSHALL and FLUSHDB are user-controlled.
# It's up to the design of the application to understand when it is a good
# idea to use one or the other. However the Redis server sometimes has to
# delete keys or flush the whole database as a side effect of other operations.
# Specifically Redis deletes objects independently of a user call in the
# following scenarios:
#
# 1) On eviction, because of the maxmemory and maxmemory policy configurations,
# in order to make room for new data, without going over the specified
# memory limit.
# 2) Because of expire: when a key with an associated time to live (see the
# EXPIRE command) must be deleted from memory.
# 3) Because of a side effect of a command that stores data on a key that may
# already exist. For example the RENAME command may delete the old key
# content when it is replaced with another one. Similarly SUNIONSTORE
# or SORT with STORE option may delete existing keys. The SET command
# itself removes any old content of the specified key in order to replace
# it with the specified string.
# 4) During replication, when a replica performs a full resynchronization with
# its master, the content of the whole database is removed in order to
# load the RDB file just transferred.
#
# In all the above cases the default is to delete objects in a blocking way,
# like if DEL was called. However you can configure each case specifically
# in order to instead release memory in a non-blocking way like if UNLINK
# was called, using the following configuration directives:

lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no

############################## APPEND ONLY MODE ###############################

# By default Redis asynchronously dumps the dataset on disk. This mode is
# good enough in many applications, but an issue with the Redis process or
# a power outage may result into a few minutes of writes lost (depending on
# the configured save points).
#
# The Append Only File is an alternative persistence mode that provides
# much better durability. For instance using the default data fsync policy
# (see later in the config file) Redis can lose just one second of writes in a
# dramatic event like a server power outage, or a single write if something
# wrong with the Redis process itself happens, but the operating system is
# still running correctly.
#
# AOF and RDB persistence can be enabled at the same time without problems.
# If the AOF is enabled on startup Redis will load the AOF, that is the file
# with the better durability guarantees.
#
# Please check http://redis.io/topics/persistence for more information.

appendonly no

# The name of the append only file (default: "appendonly.aof")

appendfilename "appendonly.aof"

# The fsync() call tells the Operating System to actually write data on disk
# instead of waiting for more data in the output buffer. Some OS will really flush
# data on disk, some other OS will just try to do it ASAP.
#
# Redis supports three different modes:
#
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# always: fsync after every write to the append only log. Slow, Safest.
# everysec: fsync only one time every second. Compromise.
#
# The default is "everysec", as that's usually the right compromise between
# speed and data safety. It's up to you to understand if you can relax this to
# "no" that will let the operating system flush the output buffer when
# it wants, for better performances (but if you can live with the idea of
# some data loss consider the default persistence mode that's snapshotting),
# or on the contrary, use "always" that's very slow but a bit safer than
# everysec.
#
# More details please check the following article:
# http://antirez.com/post/redis-persistence-demystified.html
#
# If unsure, use "everysec".

# appendfsync always
appendfsync everysec
# appendfsync no

# When the AOF fsync policy is set to always or everysec, and a background
# saving process (a background save or AOF log background rewriting) is
# performing a lot of I/O against the disk, in some Linux configurations
# Redis may block too long on the fsync() call. Note that there is no fix for
# this currently, as even performing fsync in a different thread will block
# our synchronous write(2) call.
#
# In order to mitigate this problem it's possible to use the following option
# that will prevent fsync() from being called in the main process while a
# BGSAVE or BGREWRITEAOF is in progress.
#
# This means that while another child is saving, the durability of Redis is
# the same as "appendfsync none". In practical terms, this means that it is
# possible to lose up to 30 seconds of log in the worst scenario (with the
# default Linux settings).
#
# If you have latency problems turn this to "yes". Otherwise leave it as
# "no" that is the safest pick from the point of view of durability.

no-appendfsync-on-rewrite no

# Automatic rewrite of the append only file.
# Redis is able to automatically rewrite the log file implicitly calling
# BGREWRITEAOF when the AOF log size grows by the specified percentage.
#
# This is how it works: Redis remembers the size of the AOF file after the
# latest rewrite (if no rewrite has happened since the restart, the size of
# the AOF at startup is used).
#
# This base size is compared to the current size. If the current size is
# bigger than the specified percentage, the rewrite is triggered. Also
# you need to specify a minimal size for the AOF file to be rewritten, this
# is useful to avoid rewriting the AOF file even if the percentage increase
# is reached but it is still pretty small.
#
# Specify a percentage of zero in order to disable the automatic AOF
# rewrite feature.

auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb

# An AOF file may be found to be truncated at the end during the Redis
# startup process, when the AOF data gets loaded back into memory.
# This may happen when the system where Redis is running
# crashes, especially when an ext4 filesystem is mounted without the
# data=ordered option (however this can't happen when Redis itself
# crashes or aborts but the operating system still works correctly).
#
# Redis can either exit with an error when this happens, or load as much
# data as possible (the default now) and start if the AOF file is found
# to be truncated at the end. The following option controls this behavior.
#
# If aof-load-truncated is set to yes, a truncated AOF file is loaded and
# the Redis server starts emitting a log to inform the user of the event.
# Otherwise if the option is set to no, the server aborts with an error
# and refuses to start. When the option is set to no, the user requires
# to fix the AOF file using the "redis-check-aof" utility before to restart
# the server.
#
# Note that if the AOF file will be found to be corrupted in the middle
# the server will still exit with an error. This option only applies when
# Redis will try to read more data from the AOF file but not enough bytes
# will be found.
aof-load-truncated yes

# When rewriting the AOF file, Redis is able to use an RDB preamble in the
# AOF file for faster rewrites and recoveries. When this option is turned
# on the rewritten AOF file is composed of two different stanzas:
#
# [RDB file][AOF tail]
#
# When loading Redis recognizes that the AOF file starts with the "REDIS"
# string and loads the prefixed RDB file, and continues loading the AOF
# tail.
aof-use-rdb-preamble yes

################################ LUA SCRIPTING ###############################

# Max execution time of a Lua script in milliseconds.
#
# If the maximum execution time is reached Redis will log that a script is
# still in execution after the maximum allowed time and will start to
# reply to queries with an error.
#
# When a long running script exceeds the maximum execution time only the
# SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be
# used to stop a script that did not yet called write commands. The second
# is the only way to shut down the server in the case a write command was
# already issued by the script but the user doesn't want to wait for the natural
# termination of the script.
#
# Set it to 0 or a negative value for unlimited execution without warnings.
lua-time-limit 5000

################################ REDIS CLUSTER ###############################

# Normal Redis instances can't be part of a Redis Cluster; only nodes that are
# started as cluster nodes can. In order to start a Redis instance as a
# cluster node enable the cluster support uncommenting the following:
#
# cluster-enabled yes

# Every cluster node has a cluster configuration file. This file is not
# intended to be edited by hand. It is created and updated by Redis nodes.
# Every Redis Cluster node requires a different cluster configuration file.
# Make sure that instances running in the same system do not have
# overlapping cluster configuration file names.
#
# cluster-config-file nodes-6379.conf

# Cluster node timeout is the amount of milliseconds a node must be unreachable
# for it to be considered in failure state.
# Most other internal time limits are multiple of the node timeout.
#
# cluster-node-timeout 15000

# A replica of a failing master will avoid to start a failover if its data
# looks too old.
#
# There is no simple way for a replica to actually have an exact measure of
# its "data age", so the following two checks are performed:
#
# 1) If there are multiple replicas able to failover, they exchange messages
# in order to try to give an advantage to the replica with the best
# replication offset (more data from the master processed).
# Replicas will try to get their rank by offset, and apply to the start
# of the failover a delay proportional to their rank.
#
# 2) Every single replica computes the time of the last interaction with
# its master. This can be the last ping or command received (if the master
# is still in the "connected" state), or the time that elapsed since the
# disconnection with the master (if the replication link is currently down).
# If the last interaction is too old, the replica will not try to failover
# at all.
#
# The point "2" can be tuned by user. Specifically a replica will not perform
# the failover if, since the last interaction with the master, the time
# elapsed is greater than:
#
# (node-timeout * replica-validity-factor) + repl-ping-replica-period
#
# So for example if node-timeout is 30 seconds, and the replica-validity-factor
# is 10, and assuming a default repl-ping-replica-period of 10 seconds, the
# replica will not try to failover if it was not able to talk with the master
# for longer than 310 seconds.
#
# A large replica-validity-factor may allow replicas with too old data to failover
# a master, while a too small value may prevent the cluster from being able to
# elect a replica at all.
#
# For maximum availability, it is possible to set the replica-validity-factor
# to a value of 0, which means, that replicas will always try to failover the
# master regardless of the last time they interacted with the master.
# (However they'll always try to apply a delay proportional to their
# offset rank).
#
# Zero is the only value able to guarantee that when all the partitions heal
# the cluster will always be able to continue.
#
# cluster-replica-validity-factor 10

# Cluster replicas are able to migrate to orphaned masters, that are masters
# that are left without working replicas. This improves the cluster ability
# to resist to failures as otherwise an orphaned master can't be failed over
# in case of failure if it has no working replicas.
#
# Replicas migrate to orphaned masters only if there are still at least a
# given number of other working replicas for their old master. This number
# is the "migration barrier". A migration barrier of 1 means that a replica
# will migrate only if there is at least 1 other working replica for its master
# and so forth. It usually reflects the number of replicas you want for every
# master in your cluster.
#
# Default is 1 (replicas migrate only if their masters remain with at least
# one replica). To disable migration just set it to a very large value.
# A value of 0 can be set but is useful only for debugging and dangerous
# in production.
#
# cluster-migration-barrier 1

# By default Redis Cluster nodes stop accepting queries if they detect there
# is at least an hash slot uncovered (no available node is serving it).
# This way if the cluster is partially down (for example a range of hash slots
# are no longer covered) all the cluster becomes, eventually, unavailable.
# It automatically returns available as soon as all the slots are covered again.
#
# However sometimes you want the subset of the cluster which is working,
# to continue to accept queries for the part of the key space that is still
# covered. In order to do so, just set the cluster-require-full-coverage
# option to no.
#
# cluster-require-full-coverage yes

# This option, when set to yes, prevents replicas from trying to failover its
# master during master failures. However the master can still perform a
# manual failover, if forced to do so.
#
# This is useful in different scenarios, especially in the case of multiple
# data center operations, where we want one side to never be promoted if not
# in the case of a total DC failure.
#
# cluster-replica-no-failover no

# In order to setup your cluster make sure to read the documentation
# available at http://redis.io web site.

########################## CLUSTER DOCKER/NAT support ########################

# In certain deployments, Redis Cluster nodes address discovery fails, because
# addresses are NAT-ted or because ports are forwarded (the typical case is
# Docker and other containers).
#
# In order to make Redis Cluster working in such environments, a static
# configuration where each node knows its public address is needed. The
# following two options are used for this scope, and are:
#
# * cluster-announce-ip
# * cluster-announce-port
# * cluster-announce-bus-port
#
# Each instruct the node about its address, client port, and cluster message
# bus port. The information is then published in the header of the bus packets
# so that other nodes will be able to correctly map the address of the node
# publishing the information.
#
# If the above options are not used, the normal Redis Cluster auto-detection
# will be used instead.
#
# Note that when remapped, the bus port may not be at the fixed offset of
# clients port + 10000, so you can specify any port and bus-port depending
# on how they get remapped. If the bus-port is not set, a fixed offset of
# 10000 will be used as usually.
#
# Example:
#
# cluster-announce-ip 10.1.1.5
# cluster-announce-port 6379
# cluster-announce-bus-port 6380

################################## SLOW LOG ###################################

# The Redis Slow Log is a system to log queries that exceeded a specified
# execution time. The execution time does not include the I/O operations
# like talking with the client, sending the reply and so forth,
# but just the time needed to actually execute the command (this is the only
# stage of command execution where the thread is blocked and can not serve
# other requests in the meantime).
#
# You can configure the slow log with two parameters: one tells Redis
# what is the execution time, in microseconds, to exceed in order for the
# command to get logged, and the other parameter is the length of the
# slow log. When a new command is logged the oldest one is removed from the
# queue of logged commands.

# The following time is expressed in microseconds, so 1000000 is equivalent
# to one second. Note that a negative number disables the slow log, while
# a value of zero forces the logging of every command.
slowlog-log-slower-than 10000

# There is no limit to this length. Just be aware that it will consume memory.
# You can reclaim memory used by the slow log with SLOWLOG RESET.
slowlog-max-len 128

################################ LATENCY MONITOR ##############################

# The Redis latency monitoring subsystem samples different operations
# at runtime in order to collect data related to possible sources of
# latency of a Redis instance.
#
# Via the LATENCY command this information is available to the user that can
# print graphs and obtain reports.
#
# The system only logs operations that were performed in a time equal or
# greater than the amount of milliseconds specified via the
# latency-monitor-threshold configuration directive. When its value is set
# to zero, the latency monitor is turned off.
#
# By default latency monitoring is disabled since it is mostly not needed
# if you don't have latency issues, and collecting data has a performance
# impact, that while very small, can be measured under big load. Latency
# monitoring can easily be enabled at runtime using the command
# "CONFIG SET latency-monitor-threshold <milliseconds>" if needed.
latency-monitor-threshold 0

############################# EVENT NOTIFICATION ##############################

# Redis can notify Pub/Sub clients about events happening in the key space.
# This feature is documented at http://redis.io/topics/notifications
#
# For instance if keyspace events notification is enabled, and a client
# performs a DEL operation on key "foo" stored in the Database 0, two
# messages will be published via Pub/Sub:
#
# PUBLISH __keyspace@0__:foo del
# PUBLISH __keyevent@0__:del foo
#
# It is possible to select the events that Redis will notify among a set
# of classes. Every class is identified by a single character:
#
# K Keyspace events, published with __keyspace@<db>__ prefix.
# E Keyevent events, published with __keyevent@<db>__ prefix.
# g Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...
# $ String commands
# l List commands
# s Set commands
# h Hash commands
# z Sorted set commands
# x Expired events (events generated every time a key expires)
# e Evicted events (events generated when a key is evicted for maxmemory)
# A Alias for g$lshzxe, so that the "AKE" string means all the events.
#
# The "notify-keyspace-events" takes as argument a string that is composed
# of zero or multiple characters. The empty string means that notifications
# are disabled.
#
# Example: to enable list and generic events, from the point of view of the
# event name, use:
#
# notify-keyspace-events Elg
#
# Example 2: to get the stream of the expired keys subscribing to channel
# name __keyevent@0__:expired use:
#
# notify-keyspace-events Ex
#
# By default all notifications are disabled because most users don't need
# this feature and the feature has some overhead. Note that if you don't
# specify at least one of K or E, no events will be delivered.
notify-keyspace-events ""

############################### ADVANCED CONFIG ###############################

# Hashes are encoded using a memory efficient data structure when they have a
# small number of entries, and the biggest entry does not exceed a given
# threshold. These thresholds can be configured using the following directives.
hash-max-ziplist-entries 512
hash-max-ziplist-value 64

# Lists are also encoded in a special way to save a lot of space.
# The number of entries allowed per internal list node can be specified
# as a fixed maximum size or a maximum number of elements.
# For a fixed maximum size, use -5 through -1, meaning:
# -5: max size: 64 Kb <-- not recommended for normal workloads
# -4: max size: 32 Kb <-- not recommended
# -3: max size: 16 Kb <-- probably not recommended
# -2: max size: 8 Kb <-- good
# -1: max size: 4 Kb <-- good
# Positive numbers mean store up to _exactly_ that number of elements
# per list node.
# The highest performing option is usually -2 (8 Kb size) or -1 (4 Kb size),
# but if your use case is unique, adjust the settings as necessary.
list-max-ziplist-size -2

# Lists may also be compressed.
# Compress depth is the number of quicklist ziplist nodes from *each* side of
# the list to *exclude* from compression. The head and tail of the list
# are always uncompressed for fast push/pop operations. Settings are:
# 0: disable all list compression
# 1: depth 1 means "don't start compressing until after 1 node into the list,
# going from either the head or tail"
# So: [head]->node->node->...->node->[tail]
# [head], [tail] will always be uncompressed; inner nodes will compress.
# 2: [head]->[next]->node->node->...->node->[prev]->[tail]
# 2 here means: don't compress head or head->next or tail->prev or tail,
# but compress all nodes between them.
# 3: [head]->[next]->[next]->node->node->...->node->[prev]->[prev]->[tail]
# etc.
list-compress-depth 0

# Sets have a special encoding in just one case: when a set is composed
# of just strings that happen to be integers in radix 10 in the range
# of 64 bit signed integers.
# The following configuration setting sets the limit in the size of the
# set in order to use this special memory saving encoding.
set-max-intset-entries 512

# Similarly to hashes and lists, sorted sets are also specially encoded in
# order to save a lot of space. This encoding is only used when the length and
# elements of a sorted set are below the following limits:
zset-max-ziplist-entries 128
zset-max-ziplist-value 64

# HyperLogLog sparse representation bytes limit. The limit includes the
# 16 bytes header. When an HyperLogLog using the sparse representation crosses
# this limit, it is converted into the dense representation.
#
# A value greater than 16000 is totally useless, since at that point the
# dense representation is more memory efficient.
#
# The suggested value is ~ 3000 in order to have the benefits of
# the space efficient encoding without slowing down too much PFADD,
# which is O(N) with the sparse encoding. The value can be raised to
# ~ 10000 when CPU is not a concern, but space is, and the data set is
# composed of many HyperLogLogs with cardinality in the 0 - 15000 range.
hll-sparse-max-bytes 3000

# Streams macro node max size / items. The stream data structure is a radix
# tree of big nodes that encode multiple items inside. Using this configuration
# it is possible to configure how big a single node can be in bytes, and the
# maximum number of items it may contain before switching to a new node when
# appending new stream entries. If any of the following settings are set to
# zero, the limit is ignored, so for instance it is possible to set just a
# max entries limit by setting max-bytes to 0 and max-entries to the desired
# value.
stream-node-max-bytes 4096
stream-node-max-entries 100

# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
# order to help rehashing the main Redis hash table (the one mapping top-level
# keys to values). The hash table implementation Redis uses (see dict.c)
# performs a lazy rehashing: the more operation you run into a hash table
# that is rehashing, the more rehashing "steps" are performed, so if the
# server is idle the rehashing is never complete and some more memory is used
# by the hash table.
#
# The default is to use this millisecond 10 times every second in order to
# actively rehash the main dictionaries, freeing memory when possible.
#
# If unsure:
# use "activerehashing no" if you have hard latency requirements and it is
# not a good thing in your environment that Redis can reply from time to time
# to queries with 2 milliseconds delay.
#
# use "activerehashing yes" if you don't have such hard requirements but
# want to free memory asap when possible.
activerehashing yes

# The client output buffer limits can be used to force disconnection of clients
# that are not reading data from the server fast enough for some reason (a
# common reason is that a Pub/Sub client can't consume messages as fast as the
# publisher can produce them).
#
# The limit can be set differently for the three different classes of clients:
#
# normal -> normal clients including MONITOR clients
# replica -> replica clients
# pubsub -> clients subscribed to at least one pubsub channel or pattern
#
# The syntax of every client-output-buffer-limit directive is the following:
#
# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
#
# A client is immediately disconnected once the hard limit is reached, or if
# the soft limit is reached and remains reached for the specified number of
# seconds (continuously).
# So for instance if the hard limit is 32 megabytes and the soft limit is
# 16 megabytes / 10 seconds, the client will get disconnected immediately
# if the size of the output buffers reach 32 megabytes, but will also get
# disconnected if the client reaches 16 megabytes and continuously overcomes
# the limit for 10 seconds.
#
# By default normal clients are not limited because they don't receive data
# without asking (in a push way), but just after a request, so only
# asynchronous clients may create a scenario where data is requested faster
# than it can read.
#
# Instead there is a default limit for pubsub and replica clients, since
# subscribers and replicas receive data in a push fashion.
#
# Both the hard or the soft limit can be disabled by setting them to zero.
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60

# Client query buffers accumulate new commands. They are limited to a fixed
# amount by default in order to avoid that a protocol desynchronization (for
# instance due to a bug in the client) will lead to unbound memory usage in
# the query buffer. However you can configure it here if you have very special
# needs, such us huge multi/exec requests or alike.
#
# client-query-buffer-limit 1gb

# In the Redis protocol, bulk requests, that are, elements representing single
# strings, are normally limited to 512 mb. However you can change this limit
# here.
#
# proto-max-bulk-len 512mb

# Redis calls an internal function to perform many background tasks, like
# closing connections of clients in timeout, purging expired keys that are
# never requested, and so forth.
#
# Not all tasks are performed with the same frequency, but Redis checks for
# tasks to perform according to the specified "hz" value.
#
# By default "hz" is set to 10. Raising the value will use more CPU when
# Redis is idle, but at the same time will make Redis more responsive when
# there are many keys expiring at the same time, and timeouts may be
# handled with more precision.
#
# The range is between 1 and 500, however a value over 100 is usually not
# a good idea. Most users should use the default of 10 and raise this up to
# 100 only in environments where very low latency is required.
hz 10

# Normally it is useful to have an HZ value which is proportional to the
# number of clients connected. This is useful in order, for instance, to
# avoid too many clients are processed for each background task invocation
# in order to avoid latency spikes.
#
# Since the default HZ value by default is conservatively set to 10, Redis
# offers, and enables by default, the ability to use an adaptive HZ value
# which will temporary raise when there are many connected clients.
#
# When dynamic HZ is enabled, the actual configured HZ will be used as
# as a baseline, but multiples of the configured HZ value will be actually
# used as needed once more clients are connected. In this way an idle
# instance will use very little CPU time while a busy instance will be
# more responsive.
dynamic-hz yes

# When a child rewrites the AOF file, if the following option is enabled
# the file will be fsync-ed every 32 MB of data generated. This is useful
# in order to commit the file to the disk more incrementally and avoid
# big latency spikes.
aof-rewrite-incremental-fsync yes

# When redis saves RDB file, if the following option is enabled
# the file will be fsync-ed every 32 MB of data generated. This is useful
# in order to commit the file to the disk more incrementally and avoid
# big latency spikes.
rdb-save-incremental-fsync yes

# Redis LFU eviction (see maxmemory setting) can be tuned. However it is a good
# idea to start with the default settings and only change them after investigating
# how to improve the performances and how the keys LFU change over time, which
# is possible to inspect via the OBJECT FREQ command.
#
# There are two tunable parameters in the Redis LFU implementation: the
# counter logarithm factor and the counter decay time. It is important to
# understand what the two parameters mean before changing them.
#
# The LFU counter is just 8 bits per key, its maximum value is 255, so Redis
# uses a probabilistic increment with logarithmic behavior. Given the value
# of the old counter, when a key is accessed, the counter is incremented in
# this way:
#
# 1. A random number R between 0 and 1 is extracted.
# 2. A probability P is calculated as 1/(old_value*lfu_log_factor+1).
# 3. The counter is incremented only if R < P.
#
# The default lfu-log-factor is 10. This is a table of how the frequency
# counter changes with a different number of accesses with different
# logarithmic factors:
#
# +--------+------------+------------+------------+------------+------------+
# | factor | 100 hits | 1000 hits | 100K hits | 1M hits | 10M hits |
# +--------+------------+------------+------------+------------+------------+
# | 0 | 104 | 255 | 255 | 255 | 255 |
# +--------+------------+------------+------------+------------+------------+
# | 1 | 18 | 49 | 255 | 255 | 255 |
# +--------+------------+------------+------------+------------+------------+
# | 10 | 10 | 18 | 142 | 255 | 255 |
# +--------+------------+------------+------------+------------+------------+
# | 100 | 8 | 11 | 49 | 143 | 255 |
# +--------+------------+------------+------------+------------+------------+
#
# NOTE: The above table was obtained by running the following commands:
#
# redis-benchmark -n 1000000 incr foo
# redis-cli object freq foo
#
# NOTE 2: The counter initial value is 5 in order to give new objects a chance
# to accumulate hits.
#
# The counter decay time is the time, in minutes, that must elapse in order
# for the key counter to be divided by two (or decremented if it has a value
# less <= 10).
#
# The default value for the lfu-decay-time is 1. A Special value of 0 means to
# decay the counter every time it happens to be scanned.
#
# lfu-log-factor 10
# lfu-decay-time 1

########################### ACTIVE DEFRAGMENTATION #######################
#
# WARNING THIS FEATURE IS EXPERIMENTAL. However it was stress tested
# even in production and manually tested by multiple engineers for some
# time.
#
# What is active defragmentation?
# -------------------------------
#
# Active (online) defragmentation allows a Redis server to compact the
# spaces left between small allocations and deallocations of data in memory,
# thus allowing to reclaim back memory.
#
# Fragmentation is a natural process that happens with every allocator (but
# less so with Jemalloc, fortunately) and certain workloads. Normally a server
# restart is needed in order to lower the fragmentation, or at least to flush
# away all the data and create it again. However thanks to this feature
# implemented by Oran Agra for Redis 4.0 this process can happen at runtime
# in an "hot" way, while the server is running.
#
# Basically when the fragmentation is over a certain level (see the
# configuration options below) Redis will start to create new copies of the
# values in contiguous memory regions by exploiting certain specific Jemalloc
# features (in order to understand if an allocation is causing fragmentation
# and to allocate it in a better place), and at the same time, will release the
# old copies of the data. This process, repeated incrementally for all the keys
# will cause the fragmentation to drop back to normal values.
#
# Important things to understand:
#
# 1. This feature is disabled by default, and only works if you compiled Redis
# to use the copy of Jemalloc we ship with the source code of Redis.
# This is the default with Linux builds.
#
# 2. You never need to enable this feature if you don't have fragmentation
# issues.
#
# 3. Once you experience fragmentation, you can enable this feature when
# needed with the command "CONFIG SET activedefrag yes".
#
# The configuration parameters are able to fine tune the behavior of the
# defragmentation process. If you are not sure about what they mean it is
# a good idea to leave the defaults untouched.

# Enabled active defragmentation
# activedefrag yes

# Minimum amount of fragmentation waste to start active defrag
# active-defrag-ignore-bytes 100mb

# Minimum percentage of fragmentation to start active defrag
# active-defrag-threshold-lower 10

# Maximum percentage of fragmentation at which we use maximum effort
# active-defrag-threshold-upper 100

# Minimal effort for defrag in CPU percentage
# active-defrag-cycle-min 5

# Maximal effort for defrag in CPU percentage
# active-defrag-cycle-max 75

# Maximum number of set/hash/zset/list fields that will be processed from
# the main dictionary scan
# active-defrag-max-scan-fields 1000

配置文件中的单位(units,如 1k、1GB、1Gb)对大小写不敏感。

网络:

bind 127.0.0.1  #  绑定的ip
protected-mode yes # 保护模式
port 6379 # 端口设置
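这几项也可以在运行中的实例里确认(示例;bind 和 port 属于启动配置,修改它们一般需要改配置文件并重启才能生效):

127.0.0.1:6379> CONFIG GET bind
1) "bind"
2) "127.0.0.1"
127.0.0.1:6379> CONFIG GET port
1) "port"
2) "6379"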

通用配置(GENERAL)

daemonize no  # 以守护进程的方式运行,默认是no,单机运行的时候可以设置为yes,在docker中需要设置为no
supervised no # 管理守护进程
pidfile /var/run/redis_6379.pid # redis运行的pid文件(以后台方式运行的时候需要一个pid文件)

# 日志级别设置,默认的 notice 级别适用于生产环境
# Specify the server verbosity level.
# This can be one of:
# debug (a lot of information, useful for development/testing)
# verbose (many rarely useful info, but not a mess like the debug level)
# notice (moderately verbose, what you want in production probably)
# warning (only very important / critical messages are logged)
loglevel notice
logfile "" # 日志的文件位置名
databases 16 # 数据库数量


always-show-logo yes # 是否显示logo
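下面是一个验证守护进程配置的操作草稿(假设配置文件在 /etc/redis/redis.conf 且其中已设置 daemonize yes,路径仅为示例):

redis-server /etc/redis/redis.conf   # 以守护进程方式启动redis
cat /var/run/redis_6379.pid          # 查看pid文件中记录的进程号
ps -ef | grep redis                  # 确认redis进程在后台运行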

快照:SNAPSHOTTING

持久化:在规定的时间内,执行了规定次数的写操作,就会将数据持久化到 .rdb 或 .aof 文件。

# 如果900s内至少有1个key进行了修改,就进行持久化操作
save 900 1
# 如果300s内至少有10个key进行了修改,就进行持久化操作
save 300 10
# 如果60s内至少有10000个key进行了修改,就进行持久化操作
save 60 10000


stop-writes-on-bgsave-error yes # bgsave持久化出错时,是否停止接受写操作,默认yes
rdbcompression yes # 是否压缩rdb文件,会消耗一些CPU资源
rdbchecksum yes # 保存rdb文件的时候进行校验
dbfilename dump.rdb # rdb文件的文件名
dir ./ # rdb文件的保存目录
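除了等待 save 规则自动触发,也可以手动触发快照(示例输出仅供参考):

127.0.0.1:6379> SAVE      # 在主进程中同步保存,期间会阻塞其它请求
OK
127.0.0.1:6379> BGSAVE    # fork子进程在后台保存,不阻塞主进程
Background saving started
127.0.0.1:6379> LASTSAVE  # 查看最近一次成功保存的unix时间戳
(integer) 1591356344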

SECURITY

127.0.0.1:6379> config get requirepass  # 查看配置文件的requirepass配置
1) "requirepass"
2) ""
AUTH <密码>  # 设置密码后,登录redis时需要先认证
requirepass foobared # 设置redis的密码
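一个完整的设置并验证密码的流程示例(123456为示例密码;通过 CONFIG SET 设置的密码重启后会失效,要永久生效需写入配置文件):

127.0.0.1:6379> CONFIG SET requirepass "123456"   # 设置密码
OK
127.0.0.1:6379> CONFIG GET requirepass            # 未认证时执行命令会报错
(error) NOAUTH Authentication required.
127.0.0.1:6379> AUTH 123456                       # 使用密码认证
OK
127.0.0.1:6379> CONFIG GET requirepass
1) "requirepass"
2) "123456"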

限制Client

maxclients 10000  # 允许连接的最大客户端数量
maxmemory <bytes> # 最大的内存容量



maxmemory-policy noeviction # 内存到达上限的策略

noeviction: 不删除策略, 达到最大内存限制时, 如果需要更多内存, 直接返回错误信息。(默认值)
allkeys-lru: 所有key通用; 优先删除最近最少使用(less recently used ,LRU) 的 key。
volatile-lru: 只限于设置了 expire 的部分; 优先删除最近最少使用(less recently used ,LRU) 的 key。
allkeys-random: 所有key通用; 随机删除一部分 key。
volatile-random: 只限于设置了 expire 的部分; 随机删除一部分 key。
volatile-ttl: 只限于设置了 expire 的部分; 优先删除剩余时间(time to live,TTL) 短的key。
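内存上限和淘汰策略都可以在运行时修改(示例值):

127.0.0.1:6379> CONFIG SET maxmemory 100mb
OK
127.0.0.1:6379> CONFIG SET maxmemory-policy allkeys-lru
OK
127.0.0.1:6379> CONFIG GET maxmemory-policy
1) "maxmemory-policy"
2) "allkeys-lru"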

APPEND ONLY模式 aof配置

appendonly no  # 默认不开启aof,默认使用rdb方式持久化,大部分情况下rdb完全够用
appendfilename "appendonly.aof" # aof持久化文件的名字
appendfsync everysec # 每秒执行一次同步,最多可能丢失这一秒的数据
# appendfsync always  # 每次修改都同步,数据最安全但速度慢
# appendfsync no      # 不主动同步,由操作系统自己同步数据,速度最快
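appendfsync 策略也可以在运行时切换(示例):

127.0.0.1:6379> CONFIG GET appendfsync
1) "appendfsync"
2) "everysec"
127.0.0.1:6379> CONFIG SET appendfsync always
OK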

Redis的单线程模型

为什么使用单线程?

在多数情况下,多线程的速度都要比单线程快,但是Redis却使用了单线程而不是多线程。原因大概有以下几点:

  1. 单线程的实现简单,降低了数据结构和算法的设计与实现难度
  2. Redis是基于内存操作的,瓶颈在于机器的内存和网络带宽,而不是CPU

单线程模型

Redis的客户端对服务端的每次调用都经历了发送命令,执行命令,返回结果三个阶段。

所有客户端向服务端发起连接时,服务端都会创建一个socket与其对应。IO多路复用程序是一个单线程程序,通过轮询来监控所有的socket:它只负责监听socket上产生的AE_READABLE事件,读取到命令后将命令放入队列,并不直接执行,所以IO多路复用程序本身是非阻塞的。命令的真正执行者是基于内存操作的Redis单线程。

Redis持久化

Redis是内存数据库,如果不将数据保存到磁盘中,则一旦服务进程退出,数据库中的数据也会丢失!所以redis为我们提供了持久化的功能!

在主从复制中,rdb文件主要用作从机全量同步时的备份。

RDB(Redis DataBase)

在指定的时间间隔内将内存中的数据写入磁盘,其实就相当于一个快照,当需要恢复的时候就是将快照中的文件读入到内存中。

Redis会单独fork一个子进程来进行持久化:先将数据写入到一个临时文件中,待持久化过程结束后,再用这个临时文件替换上次持久化好的文件。整个过程中,主进程不进行任何IO操作,从而确保了极高的性能。如果需要进行大规模的数据恢复,且对数据恢复的完整性不是非常敏感的话,使用RDB方式比AOF方式更加高效。RDB的缺点是:如果Redis意外宕机,最后一次持久化之后的数据会丢失。

默认使用的就是RDB,一般情况下不需要修改这个配置。

在生产环境需要将dump.rdb进行备份

RDB保存的文件默认是:dump.rdb

关机(SHUTDOWN命令)后数据依旧存在,因为SHUTDOWN会触发rdb规则,生成一个rdb文件

执行flushall命令也会触发rdb规则,默认生成一个rdb文件

如何恢复rdb文件?

将rdb文件放到redis的启动目录就可以了,redis启动的时候会自动检查dump.rdb文件,然后将数据载入内存中。

127.0.0.1:6379> config get dir  # 查找配置文件的路径,如果在这些目录下存在 dump.rdb 文件,则会自动恢复数据
1) "dir"
2) "/data"

优点:

  1. 适合大规模的数据恢复!
  2. 如果对数据完整性要求不高的时候可以使用

缺点:

  1. 需要每隔一段时间间隔才进行一次持久化操作!如果redis意外宕机,最后一次快照之后修改的数据就丢失了
  2. fork进程的时候,会占用一定的内存空间

AOF(Append Only File)

将所有执行过的写命令都记录下来,恢复的时候就把这个文件中的命令全部重新执行一遍

默认是不开启的,需要手动开启配置

appendonly no

# The name of the append only file (default: "appendonly.aof")

appendfilename "appendonly.aof"

# appendfsync always
appendfsync everysec
# appendfsync no


# 重写:aof默认的是文件的无限追加,文件会越来越大
# 如果aof文件大于64M,redis会fork一个新进程将文件进行重写
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
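除了按上面的规则自动触发,也可以手动触发一次AOF重写(示例):

127.0.0.1:6379> BGREWRITEAOF
Background append only file rewriting started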

如果AOF文件被损坏,redis将无法启动。

可以使用 redis-check-aof --fix 来修复aof文件,修复成功后重启就可以重新恢复了
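修复命令的用法示例(假设appendonly.aof在当前目录,修复时可能会截断文件末尾不完整的命令):

redis-check-aof --fix appendonly.aof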

优点:

  1. appendfsync always:每次修改都同步,文件的完整性更好
  2. appendfsync everysec:每秒同步一次,最多丢失一秒的数据
  3. appendfsync no:从不主动同步,由操作系统决定,效率最高

缺点:

  1. 相对于数据文件来说,aof远大于rdb,修复的速度也比rdb慢
  2. aof运行的效率也比rdb慢,所以redis默认的配置是使用rdb

扩展

  1. RDB持久化方式能够在指定的时间间隔内对数据进行快照式存储

  2. AOF持久化的操作记录每次对服务器的写操作,当服务器重启的时候会重新执行这些命令来恢复原始的数据,AOF模式通过redis的协议将每次的写操作记录到文件末尾,Redis还能对AOF文件进行后台重写,使得AOF文件的体积不至于过大。

  3. 如果只是做缓存,则数据只是在服务器运行的时候存在,不需要做任何持久化的操作

  4. 同时开启持久化

    1. 在这种情况下,redis会优先载入AOF文件来恢复原始的数据,因为在通常情况下,AOF文件保存的数据集要比rdb文件保存的数据更加完整
    2. RDB的数据不实时,同时使用两者的话服务器重启的时候也只会找AOF文件。
    3. 建议不要只使用AOF,应保留RDB用于备份。原因:1⃣️RDB的文件更适合备份,而AOF文件在不断变化,不好备份;2⃣️使用rdb文件重启恢复的速度更快,而且AOF可能存在潜在的BUG
  5. 性能建议

    1. 由于RDB文件只是用作后备的用途,所以在Slave节点上只需要每15分钟备份一次就够了,只保留 save 900 1 这条规则即可。
    2. 如果开启AOF,好处是即使在最恶劣的情况下也不会丢失超过2秒的数据,而且重启时只需要加载自己的AOF文件即可;代价是带来了持续的IO,并且AOF重写的最后把新数据写入新文件的过程会造成阻塞,如果硬盘允许,应该尽量减少AOF重写的频率。
    3. 如果不开启AOF,仅依靠Master-Slave复制来实现高可用也可以,这样能省掉一大笔IO,但是如果Slave和Master同时挂了(比如断电),可能会丢失十几分钟的数据,启动脚本时需要比较两个节点的RDB文件,加载较新的那个。微博就是使用的这种架构。

Redis发布订阅

参考:https://www.runoob.com/redis/redis-pub-sub.html

典型例子:微信公众号的消息推送。

涉及的概念:通信、队列、发送者(发布者)、订阅者。

Redis的发布订阅(pub/sub)是一种消息通信模式:发送者(pub)发送消息,订阅者(sub)接收消息。

Redis客户端可以订阅任意数量的频道。

发布者:发布者发布消息到频道

订阅者:需要先订阅消息

当客户端订阅了redis的某个频道后,它就被加入该频道的订阅者列表中,之后往这个频道推送的所有消息都会发送给订阅了这个频道的客户端。
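一个最简单的发布订阅示例(频道名 kuangshen 为示例):

# 客户端A:订阅频道
127.0.0.1:6379> SUBSCRIBE kuangshen
Reading messages... (press Ctrl-C to quit)
1) "subscribe"
2) "kuangshen"
3) (integer) 1
# 客户端B:向频道发布消息
127.0.0.1:6379> PUBLISH kuangshen "hello,redis"
(integer) 1
# 此时客户端A会收到:
1) "message"
2) "kuangshen"
3) "hello,redis"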

应用场景:

  1. 可以做实时消息系统
  2. 实时聊天:频道当作聊天室,将信息回显给所有人即可
  3. 订阅,关注系统

更复杂的场景会使用专业的消息中间件来做,如:RabbitMQ、Kafka。

Redis主从复制

高可用:主从复制 哨兵模式

概念

主从复制就是将一台Redis服务器的数据复制到其它的Redis服务器,前者称为Master节点,后者称为Slave节点;数据的复制是单向的,只能由主节点到从节点。Master以写为主,Slave以读为主。

默认情况下,每台Redis服务器都是主节点;且一个主节点可以有多个从节点(或者没有从节点),但一个从节点只能有一个主节点。

主从复制的作用:

  1. 数据冗余:主从复制实现了数据的热备份,是持久化之外的另外一种数据冗余方式。
  2. 故障修复:当主节点出现问题时,可以由从节点提供服务,实现快速的故障修复;实际上是一种服务的冗余。
  3. 负载均衡:在主从复制的基础上,配合读写分离,可以由主节点提供写服务,从节点提供读服务,分担服务器负载;尤其是在写少读多的情况下,通过多个节点来分担读负载,可以大量提高Redis服务器的并发量。
  4. 高可用基石:主从复制是哨兵和集群能够实施的基础,因此可以说主从复制是高可用的基础。

一般来说,如果要将Redis运用到项目中,只使用一台Redis是万万不能的,原因为:

  1. 从结构上来说,单个Redis机器可能会发生单点故障,并且一台服务器需要所有的负载,压力大;
  2. 从容量上来说,单个的Redis服务器的容量有限。一般来说,单台Redis最大使用的内存不超过20G;

电商项目一般都是多读少写的,就是:一次上传,多次浏览。对于这种场景,就可以使用主从复制的架构。

主从复制:实现读写分离!80%的情况下都是读操作!减少了单台服务器的压力!最低配置:一主二从。

环境配置

只需要配置Slave节点,不需要配置Master节点。

127.0.0.1:6379> info replication   # 查看当前库的信息
# Replication
role:master # 角色:主机
connected_slaves:0 # 连接的从机
master_replid:597f150c5c0ec5b69c0bd2293b2ffd0fdafcc380
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:0
second_repl_offset:-1
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0

Master节点(所有待配置的集群机器)中修改配置:

bind 0.0.0.0
protected-mode no
daemonize yes

修改完成后启动redis服务。

如果客户端连接的时候出现 Could not connect to Redis at 192.168.93.100:6379: No route to host 错误,引起该问题的原因通常是防火墙,需要对Master端的防火墙进行设置。

解决方案:

  1. 直接关闭防火墙(不推荐)
systemctl stop firewalld.service   # 关闭防火墙

# 其它相关命令
#显示服务的状态
systemctl status firewalld.service
#启动防火墙
systemctl start firewalld.service
  2. 开放端口(推荐)
$ firewall-cmd --add-port=6379/tcp --permanent   # 开放6379端口
$ firewall-cmd --reload # 防火墙规则生效


# 其它相关命令
# 查看开放的端口
$ firewall-cmd --list-ports
# 查询6379端口是否开放
$ firewall-cmd --query-port=6379/tcp
# 移除6379端口
$ firewall-cmd --permanent --remove-port=6379/tcp

Slave节点配置:

192.168.93.100:6379> SLAVEOF 192.168.93.100 6379   # 设置主机地址
OK
192.168.93.100:6379> info replication # 主从复制的信息
# Replication
role:slave # 当前为的角色为slave
master_host:192.168.93.100
master_port:6379
master_link_status:down
master_last_io_seconds_ago:-1
master_sync_in_progress:0
slave_repl_offset:0
master_link_down_since_seconds:1591356344
slave_priority:100
slave_read_only:1
connected_slaves:0
master_replid:0cafb70f08e40d8aa49bbb5950ca57df2f1b139f
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:0
second_repl_offset:-1
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0

配置完成后,可以在主机中找到Slave节点的地址:

# 配置两台从机
127.0.0.1:6379> info replication
# Replication
role:master
connected_slaves:2 # 连接的节点数
# slave节点信息
slave0:ip=192.168.93.101,port=6379,state=online,offset=28,lag=0
slave1:ip=192.168.93.102,port=6379,state=online,offset=28,lag=1
master_replid:487d26657b69cebbdcc4a54c4a2ce07d06a4fc23
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:28
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:28

以上是通过命令的方式配置主从,是暂时性配置,实际上应该在配置文件中配置主从,这样的话是永久的。

只需要配置从机即可,配置方式如下:

replica-read-only yes  # 配置从机只读
replicaof 192.168.93.100 6379  # 配置要复制的主机地址和端口
masterauth <master-password>  # 如果主机设置了密码,这里需要填写主机的密码

配置完成后,启动redis服务,在主节点处能够看到当前Slave节点的信息。

主机负责写,从机负责读。主机中的所有数据都会被从机保存。

主机写:

从机读:

从机如果写的话会报错。

如果主机挂了,从机不会自动变为主机。

主机断开连接后,从机依旧保持slave角色并连接着主机,只是没有写操作了;如果主机恢复,从机依然可以读取到主机写入的信息。

如果是使用命令行配置的主从,从机重启后会恢复为主机角色!但只要重新配置为从机,就又能读取到主机中的数据。

复制的原理

Slave启动成功连接到master后会发送一个sync同步命令

Master接收到命令,启动后台的存盘进程,同时收集所有接收到的用于修改数据集的命令,在后台进程执行完毕之后,master将传送整个数据文件到slave,并完成一次完全同步。

全量复制:Slave服务在接收到数据库文件数据后,将其存盘并加载到内存中

增量复制:master继续将新的所有收集的修改命令依次传给slave,完成同步

只要重新连接master,一次同步(全量复制)将会被自动执行!之后的数据就可以在从机中看到。

分布式连接模式:

模式1:

模式2:

也可以完成主从复制!

如果Master节点挂了,则需要手动来指定Master节点。

SLAVEOF no one   # 如果主机断开了,可以使用这个命令来让自己变成Master节点

Redis缓存存在的问题和解决方案

为了解决服务的高可用问题!

Redis缓存的使用,极大地提升了应用程序的性能和效率,特别是数据查询方面。但同时,它也带来了一些问题。其中,最关键的问题是数据的一致性问题,从严格意义上讲,这个问题无解。如果对数据的一致性要求很高,那么就不能使用缓存。

另外的一些典型问题就是,缓存穿透、缓存雪崩和缓存击穿。目前,业界也都有比较流行的解决方案。

带缓存的数据请求流程

缓存穿透(查不到数据/查询一个不存在的数据)

概念

缓存穿透的概念很简单,用户想要查询一个数据,发现redis内存数据库没有,也就是缓存没有命中,于是向持久层数据库查询。发现也没有,于是本次查询失败。当用户很多的时候,缓存都没有命中(秒杀!),于是都去请求了持久层数据库。这会给持久层数据库造成很大的压力,这时候就相当于出现了缓存穿透。

原因

  1. 查询了缓存和数据库中都不存在的数据(可能是正常的缓存未命中,也可能是恶意的大量不存在key的请求),导致每次请求都落到持久层数据库上

解决方案

布隆过滤器

布隆过滤器是一种数据结构,对所有可能查询的参数以hash形式存储,在控制层先进行校验,不符合则丢弃,从而避免了对底层存储系统的查询压力;

布隆过滤器
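布隆过滤器不是Redis自带的功能,一种常见做法是使用 RedisBloom 模块,下面是一个假设已加载该模块的示例(key名、误判率、容量均为示例值):

127.0.0.1:6379> BF.RESERVE user:filter 0.01 1000000   # 创建误判率1%、预期容量100万的过滤器
OK
127.0.0.1:6379> BF.ADD user:filter 10086              # 把存在的用户id加入过滤器
(integer) 1
127.0.0.1:6379> BF.EXISTS user:filter 10086           # 可能存在,放行去查缓存/数据库
(integer) 1
127.0.0.1:6379> BF.EXISTS user:filter 99999           # 一定不存在,直接拦截,不再查库
(integer) 0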

缓存空对象

当存储层不命中后,即使返回的空对象也将其缓存起来,同时会设置一个过期时间,之后再访问这个数据将会从缓存中获取,保护了后端数据源。
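缓存空对象的示例(key和过期时间为示例值):

127.0.0.1:6379> SET user:10086 "" EX 60   # 数据库查不到时,缓存一个空值并设置60秒过期
OK
127.0.0.1:6379> GET user:10086            # 60秒内的重复请求直接命中缓存,不再打到数据库
""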


但是这种方法会存在两个问题:

  1. 如果空值能够被缓存起来,这就意味着缓存需要更多的空间存储更多的键,因为这当中可能会有很多的空值的键;
  2. 即使对空值设置了过期时间,还是会存在缓存层和存储层的数据会有一段时间窗口的不一致,这对于需要保持一致性的业务会有影响。

缓存击穿(同一个key的访问量太大/缓存过期)

概述

这里需要注意和缓存穿透的区别。缓存击穿是指一个key非常热点,在不停地扛着大并发,大并发集中对这一个点进行访问,当这个key失效的瞬间,持续的大并发就穿破缓存,直接请求数据库,就像在一个屏障上凿开了一个洞。由于缓存过期,大量请求会同时访问数据库查询最新数据并回写缓存,导致数据库瞬间压力过大。

解决方案

设置热点数据永不过期

从缓存层面来看,没有设置过期时间,所以不会出现热点 key 过期后产生的问题。

加互斥锁

分布式锁:使用分布式锁,保证对于每个key同时只有一个线程去查询后端服务,其他线程没有获得分布式锁的权限,因此只需要等待即可。这种方式将高并发的压力转移到了分布式锁,因此对分布式锁的考验很大。

互斥锁
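用Redis自身实现互斥锁的一个简化示例(key、值、过期时间均为示例;生产环境中释放锁前还应校验锁的持有者):

127.0.0.1:6379> SET lock:hot_key uuid-a NX EX 10   # 只有抢到锁的线程去查数据库并回写缓存
OK
127.0.0.1:6379> SET lock:hot_key uuid-b NX EX 10   # 其它线程抢锁失败,稍后重试读缓存
(nil)
127.0.0.1:6379> DEL lock:hot_key                   # 回写缓存完成后释放锁
(integer) 1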

缓存雪崩(缓存失效/不可用)

概念

缓存雪崩,是指在某一个时间段,缓存集中过期失效。这样就有可能导致其它依赖缓存来抵挡请求压力的服务出现级联的不可用现象。

缓存的工作原理图:

缓存工作原理图

例子:

​ 在双十二零点的时候,会迎来一波抢购,这波商品事先比较集中的放入了缓存,假设缓存时间为一个小时。那么到了凌晨一点钟的时候,这批商品的缓存就都过期了。这时对这批商品的访问查询,都落到了数据库上。那么对于数据库而言,就会产生周期性的压力波峰。于是所有的请求都会达到存储层,存储层的调用量会暴增,造成存储层也会挂掉的情况。其实集中过期,倒不是非常致命,比较致命的缓存雪崩,是缓存服务器某个节点宕机或断网。自然形成的缓存雪崩,一定是在某个时间段集中创建缓存,这个时候,数据库也是可以顶住压力的,只是会对数据库产生周期性的压力而已;而缓存服务节点的宕机,对数据库服务器造成的压力是不可预知的,很有可能瞬间就把数据库压垮。

原因

根本原因:缓存层出现故障导致其他服务需要承受巨大的压力。

出现雪崩的直接原因:

  1. Redis集群大面积出现故障(物理故障或者软件故障)
  2. 缓存失效但此时依然有大量请求访问缓存服务器(Redis)
  3. 缓存服务不可用(Key失效等)导致大量的请求转向MySQL数据库,导致服务的级联“崩溃”,最后导致服务器宕机。

解决方案

Redis高可用

Redis集群

这个思想的含义是,既然redis有可能挂掉,那我多增设几台redis,这样一台挂掉之后其他的还可以继续工作,其实就是搭建的集群(异地多活!)。

Redis Sentinel 可以保障Redis集群的高可用性。

限流降级

限流降级这个解决方案的思想是,在缓存失效后,通过加锁或者队列来控制读数据库写缓存的线程数量。比如对某个key只允许一个线程查询数据和写缓存,其他线程等待。

系统可以通过某些数据来实现自动降级,也可以配合运维进行人工降级。

数据预热

数据预热的含义就是在正式部署之前,先把可能被大量访问的数据预先访问一遍,这样这部分数据就会被提前加载到缓存中。在即将发生大并发访问前手动触发加载不同的缓存key,并设置不同的过期时间,让缓存失效的时间点尽量均匀。
