Saturn

The devil is in the details.

Distributed protocol

Consensus Algorithm

A consensus algorithm must satisfy three conditions:

non-triviality: if a value v is decided, then v must have been proposed by a proposer

safety & safe learning: if any two proposers learn decided values a and b, then a = b

Classic Paxos

Two phases:

  1. reading phase
  2. writing phase

properties for proposer:

  1. Epochs (each epoch is used by a unique proposer; see proposal copying below)
  2. A proposer starts proposing only after receiving promises from a majority of acceptors
  3. A proposer returns the final decided value only after receiving accepts from a majority of acceptors
  4. If no value has been decided previously, any value may be proposed; if one has, the value with the highest epoch is proposed
  5. The epoch used by a proposer is greater than all epochs it has used before

properties for acceptor:

  1. An acceptor processes only those prepare or propose messages whose epoch is >= its last promised epoch
  2. For each prepare or propose message received, the acceptor’s last promised epoch is set to the epoch received. This is after Property 6 has been satisfied.
  3. For each prepare message received, the acceptor replies with promise. This is after Properties 6 & 7 have been satisfied.
  4. For each propose message received, the acceptor replies with accept after updating its last accepted proposal. This is after Properties 6 & 7 have been satisfied.
  5. Last promised epoch and last accepted proposal are persistent and are only updated by Properties 7 & 9.
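
Below is a minimal sketch of an acceptor that follows Properties 6-10 above, assuming simple in-memory state and illustrative message shapes (not any particular Paxos implementation):

```python
# Minimal Paxos acceptor sketch following Properties 6-10 above.
# State and message shapes are illustrative assumptions.

class Acceptor:
    def __init__(self):
        # Property 10: this state must be persistent in a real system.
        self.last_promised_epoch = -1
        self.last_accepted = None  # (epoch, value) of the last accepted proposal

    def on_prepare(self, epoch):
        # Property 6: ignore epochs below the last promised epoch
        # (or reply with a no-prepare NACK, as suggested below).
        if epoch < self.last_promised_epoch:
            return None
        # Property 7: update the last promised epoch.
        self.last_promised_epoch = epoch
        # Property 8: reply with a promise carrying the last accepted proposal.
        return ("promise", epoch, self.last_accepted)

    def on_propose(self, epoch, value):
        # Property 6 applies to propose messages as well.
        if epoch < self.last_promised_epoch:
            return None
        self.last_promised_epoch = epoch
        # Property 9: update the last accepted proposal, then reply with accept.
        self.last_accepted = (epoch, value)
        return ("accept", epoch)
```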

Improvements to Classic Paxos

  1. For a prepare or propose with a smaller epoch, the acceptor does not respond, so the proposer can only wait for a timeout and retry. This can be improved by having the acceptor send a no-prepare or no-accept (NACK) message.
  2. In the prepare phase, if the value carried by the majority of promises matches the value to be proposed in phase two, the second phase can be skipped and the value returned directly. (But how does the algorithm know that the decided value is exactly the one it would propose in phase two?)
  3. After a single acceptor receives a confirmation message, it returns "decided" to the proposer, so the proposer can return the decided value without waiting for confirmations from a majority of acceptors. (Is this really safe?) Mencius

proposal copying

(Each epoch is used by a unique proposer.) Within one epoch only a single proposal can be accepted. If, during phase one, the proposer receives a NACK whose max epoch equals the epoch it sent in the prepare and which already carries a promised value, it performs proposal copying and jumps to phase two (for faster convergence?). A sketch follows.
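
A rough sketch of the proposer side of phase one under these notes, including the NACK and proposal-copying ideas (reply formats and the exact copying condition follow the description above and are assumptions, not a definitive implementation):

```python
# Proposer phase-one sketch: collect replies, apply proposal copying when a
# NACK matching our epoch already carries an accepted value, otherwise pick
# the value to propose in phase two. Reply shapes are illustrative.

def run_phase_one(my_epoch, my_value, replies, majority):
    promises = [r for r in replies if r["type"] == "promise"]
    nacks = [r for r in replies if r["type"] in ("no-prepare", "no-accept")]

    # Proposal copying (per the note above): a NACK whose epoch equals our own
    # and that already carries an accepted proposal lets us adopt that
    # proposal and jump straight to phase two.
    for n in nacks:
        if n["epoch"] == my_epoch and n.get("accepted") is not None:
            _, copied_value = n["accepted"]
            return ("phase_two", my_epoch, copied_value)

    if len(promises) < majority:
        return ("retry_with_higher_epoch", None, None)

    # Standard value selection: if any promise carries an accepted proposal,
    # propose the value with the highest epoch; otherwise use our own value.
    accepted = [p["accepted"] for p in promises if p.get("accepted") is not None]
    if accepted:
        _, value = max(accepted, key=lambda ev: ev[0])
    else:
        value = my_value
    return ("phase_two", my_epoch, value)
```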

Raft

  1. Three states
    1. leader
    2. follower
    3. candidate
  2. Each server stores a current term number
  3. Communication uses RPCs:
    1. RequestVote RPC (initiated by candidates during elections)
    2. AppendEntries RPCs (initiated by leaders to replicate log entries and to provide a form of heartbeat)
  4. leader election
    1. When: as long as a follower keeps receiving RPCs from a leader or candidate, it remains a follower; if no RPC arrives within the election timeout, it assumes the leader is dead and starts a leader election
    2. Election flow: the follower increments its current term (term += 1), converts to candidate, votes for itself, and sends RequestVote RPCs to the other servers in the cluster in parallel
    3. A candidate stays in that state until:
      1. it wins the election
      2. another server wins the election
      3. a period of time passes with no winner
    4. A candidate wins the election by collecting votes from a majority of the servers in the cluster for the same term. Each server votes for at most one candidate per term, first-come-first-served. Once a candidate wins, it sends heartbeats to the other nodes to prevent new elections
    5. How are split votes resolved (every server starts an election at the same time and votes for itself)?
      1. Randomized election timeouts (150-300 ms)
    6. Followers vote only for candidates whose log is at least as up-to-date as their own
    7. If a candidate receives a request from a leader, it compares the leader's term with its own; if the leader's term is at least as large, it becomes a follower, otherwise it rejects the request and remains a candidate
  5. Term (logical clock)
    1. Properties: consecutive integers; each server keeps a current term
    2. Terms are exchanged whenever servers communicate
    3. If server a's term is smaller than b's, a updates its term to b's
    4. If a candidate or leader discovers a larger term, it reverts to follower
    5. If a server receives a request with a stale term, it rejects the request
  6. Log replication
    1. Flow
      1. A command arrives from a client
      2. The leader appends the command to its log
      3. The leader sends the entry to the other servers to replicate it
      4. Once a majority of servers have replied, the leader applies the command to its state machine
      5. The result is returned to the client
      6. (If followers crash, run slowly, or there are network issues, the leader retries the RPCs indefinitely until all followers store the complete log)
    2. log structure
      1. state machine command
      2. term number
        1. to detect inconsistencies between logs
      3. integer index
        1. to identify its position in the log
    3. Committed log entries
      1. An entry is committed once it has been replicated on a majority of servers
      2. The leader includes the highest index it knows to be committed in its AppendEntries RPCs, so followers learn it and apply that entry to their state machines
    4. How are inconsistencies handled?
      1. By forcing the followers' logs to duplicate the leader's own (conflicting follower entries are overwritten)
      2. The leader keeps a nextIndex value for each follower, initialized to the leader's last log index + 1. It probes with AppendEntries RPCs; whenever the consistency check fails it steps nextIndex back by one and retries until it finds a point where the logs agree. (The Leader Append-Only property ensures the leader never deletes or overwrites entries in its own log.)
    5. If the leader crashes after an entry has been replicated to a majority of nodes but before it is committed, the entry may still be overwritten
    6. Raft uses the up-to-date voting rule to ensure the new leader holds all committed entries (see the sketch after this list)
      1. If the logs' last entries have different terms, the log with the later term is more up-to-date
      2. If the logs end with the same term, whichever log is longer is more up-to-date
  7. Timing and availability
    1. broadcastTime (heartbeat) << electionTimeout << MTBF
  8. Raft visualizations
    1. http://thesecretlivesofdata.com/raft/
    2. https://raft.github.io
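
A small sketch of the up-to-date rule from item 6 and the randomized election timeout from item 4, assuming plain integer terms and indexes (function names are mine; the rules are from the Raft paper):

```python
import random

# Up-to-date check used when granting votes: the candidate's log must be at
# least as up-to-date as the voter's.
def candidate_log_is_up_to_date(candidate_last_term, candidate_last_index,
                                my_last_term, my_last_index):
    # Rule 1: the log whose last entry has the later term is more up-to-date.
    if candidate_last_term != my_last_term:
        return candidate_last_term > my_last_term
    # Rule 2: with equal last terms, the longer log is more up-to-date.
    return candidate_last_index >= my_last_index

# Randomized election timeout (150-300 ms) used to avoid repeated split votes.
def next_election_timeout_ms():
    return random.randint(150, 300)

print(candidate_log_is_up_to_date(5, 10, 4, 20))  # True: the later term wins
print(next_election_timeout_ms())
```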

Chain replication

  1. request processing
    1. Reply generation
      1. The reply is generated and sent by the tail
    2. Query processing
      1. Queries are processed by the tail
    3. Update processing
      1. Updates are processed by the head and propagated down the chain (see the sketch after this list)
  2. Advantages & disadvantages
    1. Read: cheap, because only the tail replies to queries
    2. Write: heavier, because all nodes must participate; but the computation happens only once, since the head computes the value and the other nodes just perform a single write
  3. Coping with server failures
    1. Just remove the failed node from the chain?
  4. …… (did not read further; lost interest)
  5. Reference: https://www.dazhuanlan.com/andyblack/topics/1098082
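
A minimal sketch of the request flow described above, with updates entering at the head, propagating down the chain, and replies and queries handled by the tail (node and message structure are illustrative assumptions):

```python
# Minimal chain-replication sketch: updates flow head -> ... -> tail,
# queries and replies are handled by the tail only.

class ChainNode:
    def __init__(self, successor=None):
        self.store = {}
        self.successor = successor  # next node in the chain; None for the tail

    def handle_update(self, key, value):
        # Every node applies the update, then forwards it toward the tail.
        self.store[key] = value
        if self.successor is not None:
            return self.successor.handle_update(key, value)
        # The tail generates and sends the reply.
        return ("updated", key)

    def handle_query(self, key):
        # Queries are served only by the tail, which makes reads cheap.
        assert self.successor is None, "queries must be sent to the tail"
        return self.store.get(key)

# Build a three-node chain: head -> middle -> tail.
tail = ChainNode()
middle = ChainNode(successor=tail)
head = ChainNode(successor=middle)

head.handle_update("x", 42)
print(tail.handle_query("x"))  # 42
```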

Facebook f4 system

  1. Design details

    1. Volumes:

      1. States (see the sketch at the end of this section):
        1. Unlocked: under 100 GB; reads, writes, and deletes are allowed
        2. Locked: only reads and deletes are allowed
      2. Types:
        1. Data file: BLOB + metadata (key, size, or checksum)
        2. Index file: exists to allow rebuilding on reboot
        3. Journal file: tracks BLOBs that have been deleted
    2. System overview

      1. Routing
        1. Creates go to hot storage
        2. Deletes go to hot or warm storage
        3. Reads go to the cache, hot storage, or warm storage
      2. controller
        1. provision new machines
        2. Maintain pool of unlocked volumes
        3. ensure all logical volumes have enough physical volumes backing them
        4. create new physical volumes if necessary
        5. perform compaction and garbage collection
      3. Router tier
        1. Stores the mapping from logical volumes to physical volumes
      4. Transformer Tier
        1. Separates computation from storage: compute nodes perform the computation, storage nodes only store and retrieve data

      …… (not finished reading)
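
A tiny sketch of the volume-state rule above: volumes below the size limit are unlocked and accept writes, while locked volumes accept only reads and deletes. Names and the exact threshold check are illustrative, not the real f4 API.

```python
# Sketch of the unlocked/locked volume rule from the notes above.
UNLOCKED_LIMIT_GB = 100

def allowed_ops(volume_size_gb):
    if volume_size_gb < UNLOCKED_LIMIT_GB:
        return {"read", "write", "delete"}  # unlocked volume
    return {"read", "delete"}               # locked volume

print(allowed_ops(42))   # {'read', 'write', 'delete'}
print(allowed_ops(120))  # {'read', 'delete'}
```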

Reliability strategies

  1. Replication strategy: place replicas on different nodes / different racks / different DCs
  2. Consistent hashing (see the sketch after this list)
  3. CRUSH (Controlled Replication Under Scalable Hashing): CRUSH is designed to distribute data across devices weighted by their storage capacity and bandwidth while keeping the distribution probabilistically balanced. Replica placement within a hierarchy of storage devices also matters for data safety. By reflecting the physical installation layout of the system, CRUSH can modularize the system and thereby localize potential device failures.
  4. EC (Erasure Coding)
    1. Characteristics:
      1. Low redundancy, high disk utilization
      2. High cost of data recovery
      3. High cost of data updates
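
A minimal consistent-hashing sketch for item 2 above: nodes are hashed onto a ring (with virtual nodes to smooth the distribution), and each key is assigned to the first node clockwise from the key's hash. The hash choice and vnode count are arbitrary illustrative assumptions.

```python
import bisect
import hashlib

def _hash(s: str) -> int:
    # Any stable hash works; md5 is used here only for illustration.
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class ConsistentHashRing:
    def __init__(self, nodes, vnodes=100):
        # Place vnodes virtual points per node on the ring.
        self._ring = sorted((_hash(f"{n}#{i}"), n) for n in nodes for i in range(vnodes))
        self._points = [p for p, _ in self._ring]

    def node_for(self, key: str) -> str:
        # First virtual point clockwise from the key's hash (wrapping around).
        idx = bisect.bisect(self._points, _hash(key)) % len(self._ring)
        return self._ring[idx][1]

ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
print(ring.node_for("blob:12345"))  # one of node-a / node-b / node-c
```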

Distributed transaction protocols

  1. 2PC (a minimal coordinator sketch follows this list)
    1. Phases:
      1. Prepare (voting): the coordinator sends prepare to the participants; each participant executes the transaction's operations and returns yes or no to the coordinator
      2. Commit (commit or abort): if the coordinator receives yes from every participant, it enters the commit flow and sends commit to all participants; if it receives no from any participant, it runs the abort flow. Participants return their execution results to the coordinator
    2. Problems:
      1. Synchronous blocking: if participants share resources, access to those critical resources is blocked while the transaction is in flight
      2. A coordinator failure can leave participants blocked for a long time
      3. Data inconsistency: if the coordinator fails while sending commit, only some participants receive the commit
      4. If the coordinator crashes while sending commit and the only participant that received the message also crashes, the state of the transaction is unknown
  2. 3PC
    1. Phases:
      1. can-commit
        1. The coordinator asks the participants whether they can commit
        2. Participants reply yes or no
      2. pre-commit
        1. If every participant replied yes, the coordinator:
          1. sends a pre-commit request
          2. Participants pre-commit: they write undo and redo logs, lock the records, and return a response
        2. If any participant replied no, or the wait times out, the coordinator:
          1. sends an abort request
          2. Participants abort when they receive the abort request or when their own wait times out
      3. do-commit
        1. If all replies received are yes:
          1. the coordinator sends commit
          2. participants commit the transaction and release resources
          3. participants reply with an acknowledgement
          4. once the coordinator has received every response, the transaction is complete
        2. If any no is received:
          1. the coordinator sends abort
          2. participants roll back and release resources
          3. participants reply with an acknowledgement
          4. once the coordinator has received every reply, the abort is complete
    2. How is a coordinator timeout handled?
      1. With a timeout mechanism
    3. How is synchronous blocking solved?
      1. It is not solved
    4. How is inconsistency solved?
      1. It is not solved
    5. Advantages over 2PC?
      1. The timeout mechanism mitigates the coordinator-crash problem to some extent
      2. The first phase is split in two: the can-commit phase can determine early whether the transaction can run at all, which holds fewer resources and improves throughput
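
A minimal two-phase-commit coordinator sketch for the 2PC description above. The participant interface is an assumption for illustration; a real implementation also needs persistent logging and timeouts, which is exactly where the problems listed above come from.

```python
# Minimal 2PC sketch: phase 1 collects votes, phase 2 commits only if every
# participant voted yes, otherwise aborts. Interfaces are illustrative.

class Participant:
    def prepare(self) -> bool:
        # Execute the transaction's operations locally, hold locks,
        # and vote yes/no. This toy participant always votes yes.
        return True

    def commit(self):
        print("commit")

    def abort(self):
        print("abort")

def two_phase_commit(participants):
    # Phase 1 (prepare): ask every participant to vote.
    votes = [p.prepare() for p in participants]
    # Phase 2 (commit or abort): commit only if all votes are yes.
    if all(votes):
        for p in participants:
            p.commit()
        return "committed"
    for p in participants:
        p.abort()
    return "aborted"

print(two_phase_commit([Participant(), Participant()]))  # prints commits, then "committed"
```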