
· 17 min read

Seata is an open-source distributed transaction solution with over 24000 stars and a highly active community. It is dedicated to providing high-performance and user-friendly distributed transaction services in microservices architecture.

Currently, Seata's distributed transaction data storage modes include file, db, and redis. This article focuses on the architecture, deployment and usage, and benchmark comparison of Seata-Server Raft mode. It explores why Seata needs Raft and walks through the journey from research and comparison to design, implementation, and the lessons learned along the way.

Presenter: Jianbin Chen (funkye), GitHub ID: funky-eyes

2. Architecture Introduction

2.1 What is Raft Mode?

Firstly, it is essential to understand what the Raft distributed consensus algorithm is. The following excerpt is a direct quote from the official documentation of sofa-jraft:

RAFT is a novel and easy-to-understand distributed consensus replication protocol proposed by Diego Ongaro and John Ousterhout at Stanford University. It serves as the central coordination component in the RAMCloud project. Raft is a Leader-Based variant of Multi-Paxos, providing a more complete and clear protocol description compared to protocols like Paxos, Zab, View Stamped Replication. It also offers clear descriptions of node addition and deletion. As a replication state machine, Raft is the most fundamental component in distributed systems, ensuring ordered replication and execution of commands among multiple nodes, guaranteeing consistency when the initial states of multiple nodes are consistent.

In summary, Seata's Raft mode is based on the Sofa-Jraft component, implementing the ability to ensure the data consistency and high availability of Seata-Server itself.

2.2 Why Raft Mode is Needed

After understanding the definition of Seata-Raft mode, you might wonder whether Seata-Server is now unable to ensure consistency and high availability. Let's explore how Seata-Server currently achieves this from the perspectives of consistency and high availability.

2.2.1 Existing Storage Modes

In the current Seata design, the role of the Server is to ensure the correct execution of the two-phase commit for transactions. However, this depends on the correct storage of transaction records. To ensure that transaction records are not lost, it is necessary to drive all Seata-RM instances to perform the correct two-phase commit behavior while maintaining correct state. So, how does Seata currently store transaction states and records?

Firstly, let's introduce the three transaction storage modes supported by Seata: file, db, and redis. In terms of consistency ranking, the db mode provides the best guarantee for transaction records, followed by the asynchronous flushing of the file mode, and finally the aof and rdb modes of redis.

To elaborate:

  • The file mode is Seata's self-implemented transaction storage method. It stores transaction information on the local disk in a sequential-write manner and, for performance, defaults to asynchronous flushing while keeping transaction information in memory, ensuring that memory and disk data stay consistent. In the event of an unexpected Seata-Server (TC) crash, it reads transaction information from disk on restart and restores it to memory so that transaction contexts can resume.

  • The db mode is another implementation of Seata's abstract transaction storage manager (AbstractTransactionStoreManager). It relies on databases such as PostgreSQL, MySQL, Oracle, etc., to perform transaction information operations. Consistency is guaranteed by the local transactions of the database, and data persistence is the responsibility of the database.

  • Redis, similar to db, is a transaction storage method based on Jedis plus Lua scripts. In Seata 2.x, all operations (such as lock competition) are performed through Lua scripts. Data storage relies on the storage side (Redis) to ensure data consistency; like db, the redis mode adopts a computation-and-storage-separation architecture in Seata.

2.2.2 High Availability

High availability is simply the ability of a cluster to continue running normally after the primary node crashes. The common approach is to deploy multiple nodes providing the same service and use a registry to sense each node's online/offline status in real time, so that traffic can be switched to an available node in time.

It may seem that deploying a few more machines is all that's needed. However, there is a problem behind it – how to ensure that multiple nodes operate as a whole. If one node crashes, another node can seamlessly take over the work of the crashed node, including handling the data of the crashed node. The answer to solving this problem is simple: in a computation and storage separation architecture, store data in a shared middleware. Any node can access this shared storage area to obtain transaction information for all nodes' operations, thus achieving high availability.

However, the prerequisite is that computation and storage must be separated. Why is a combined computation-and-storage design not feasible? This brings us to the implementation of the File mode. As described earlier, the File mode stores data on the local disk and in node memory, with no synchronization of write operations across nodes. This means the current File mode cannot achieve high availability and only supports single-machine deployment. Beyond quick starts and simple trials, the File mode has limited applicability, and this high-performance, memory-based mode is rarely used in production environments anymore.

2.3 How is Seata-Raft Designed?

2.3.1 Design Principles

The design philosophy of Seata-Raft mode is to encapsulate the File mode, which is unable to achieve high availability, and use the Raft algorithm to synchronize data between multiple TCs. This mode ensures data consistency among multiple TCs when using the File mode and replaces asynchronous flushing operations with Raft logs and snapshots for data recovery.

(Figure: Seata-Raft transaction flow)

In the Seata-Raft mode, the client side, upon startup, retrieves its transaction group (e.g., default) and the IP addresses of the relevant Raft cluster nodes from the configuration center. By sending a request to the control port of Seata-Server, the client obtains the metadata of the Raft cluster corresponding to the default group, including the leader, follower, and learner member nodes. Subsequently, the client watches the non-leader member nodes.

Assuming TM initiates a transaction and the leader node in the local metadata points to TC1, TM will only interact with TC1. When TC1 adds global transaction information, it sends the log to the other nodes via the Raft protocol (step 1 in the diagram); step 2 is the followers' response to the log. When more than half of the nodes (for example, TC2) accept the log and respond successfully, the state machine (FSM) on TC1 executes the action of adding the global transaction.
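
To make the replication flow above concrete, below is a minimal, hedged sketch of how a sofa-jraft state machine applies replicated log entries once a majority has acknowledged them. The GlobalSessionStore interface is a hypothetical placeholder; Seata's real logic lives in io.seata.server.cluster.raft.RaftStateMachine and is considerably richer.

```java
import java.nio.ByteBuffer;

import com.alipay.sofa.jraft.Iterator;
import com.alipay.sofa.jraft.Status;
import com.alipay.sofa.jraft.core.StateMachineAdapter;

// Hypothetical in-memory store of global/branch sessions, standing in for Seata's session manager.
interface GlobalSessionStore {
    void apply(ByteBuffer serializedOperation);
}

// Sketch: onApply is only invoked for entries already accepted by a Raft majority,
// so mutating the in-memory state here is safe and identical on every replica.
public class GlobalSessionStateMachine extends StateMachineAdapter {

    private final GlobalSessionStore sessionStore;

    public GlobalSessionStateMachine(GlobalSessionStore sessionStore) {
        this.sessionStore = sessionStore;
    }

    @Override
    public void onApply(Iterator iter) {
        while (iter.hasNext()) {
            // Deserialize the replicated operation (e.g. "add global transaction") and apply it.
            sessionStore.apply(iter.getData());
            if (iter.done() != null) {
                // On the leader, unblock the request thread waiting for the commit.
                iter.done().run(Status.OK());
            }
            iter.next();
        }
    }
}
```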


What happens if TC1 crashes or a re-election occurs? Since the metadata was obtained during the initial startup, the client's watch on a follower node's interface returns and updates the local metadata, so subsequent transaction requests are sent to the new leader (e.g., TC2). Meanwhile, TC1's data has already been synchronized to TC2 and TC3, ensuring data consistency. Only at the moment of the election, if a transaction happens to be sent to the old leader, will it be actively rolled back to ensure data correctness.

It is important to note that if a transaction happens to be at the stage of sending its resolution request, or its first phase has not yet finished, exactly when the election occurs, it will be actively rolled back, because the RPC node has crashed or a re-election has taken place and RPC retry is not yet implemented on this path. The TM side does retry up to 5 times by default, but since an election takes roughly 1s-2s, transactions still in the 'begin' state may not be resolved successfully, so they are rolled back first to release locks and avoid affecting the correctness of other business.

2.3.2 Fault Recovery

In Seata, when a TC experiences a failure, the data recovery process is as follows:

(Figure: fault recovery flow)

As shown in the above diagram:

  • Check for the Latest Data Snapshot: Firstly, the system checks for the existence of the latest data snapshot file. The data snapshot is a one-time full copy of the in-memory data state. If there is a recent data snapshot, the system directly loads it into memory.

  • Replay Based on Raft Logs After the Snapshot: Starting after the latest snapshot (or from the beginning if no snapshot file exists), the system replays the data based on the previously recorded Raft logs. Each request in Seata-Server ultimately goes through the ServerOnRequestProcessor, then moves to the specific coordinator class (DefaultCoordinator or RaftCoordinator), and further proceeds to the specific business code (DefaultCore) for the corresponding transaction processing (e.g., begin, commit, rollback).

  • After the log replay is complete, the leader initiates log synchronization and continues to execute the related transaction's add, delete, and modify actions.

Through these steps, Seata can achieve data recovery after a failure. It first attempts to load the latest snapshot, if available, to reduce replay time. Then, it replays based on Raft logs to ensure the consistency of data operations. Finally, through the log synchronization mechanism, it ensures data consistency among multiple nodes.
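
As a rough illustration of the snapshot side of this recovery flow, here is a hedged sketch of the two snapshot hooks a sofa-jraft state machine implements. The file name and the SessionSnapshot helper are hypothetical placeholders; Seata's actual implementation is io.seata.server.cluster.raft.snapshot.session.SessionSnapshotFile, visible in the startup logs later in this article.

```java
import com.alipay.sofa.jraft.Closure;
import com.alipay.sofa.jraft.Status;
import com.alipay.sofa.jraft.error.RaftError;
import com.alipay.sofa.jraft.storage.snapshot.SnapshotReader;
import com.alipay.sofa.jraft.storage.snapshot.SnapshotWriter;

// Sketch of the snapshot hooks; in a real state machine these would be @Override methods
// on the same class that implements onApply.
public class SessionSnapshotHooks {

    private final SessionSnapshot snapshot = new SessionSnapshot(); // hypothetical helper

    // Called periodically (see snapshot-interval): dump the full in-memory session state to disk
    // so that Raft logs older than this point can be truncated.
    public void onSnapshotSave(SnapshotWriter writer, Closure done) {
        boolean ok = snapshot.saveTo(writer.getPath() + "/sessions")
                && writer.addFile("sessions"); // register the file in the snapshot metadata
        done.run(ok ? Status.OK() : new Status(RaftError.EIO, "session snapshot save failed"));
    }

    // Called on restart or catch-up: load the snapshot first, then replay the remaining Raft logs.
    public boolean onSnapshotLoad(SnapshotReader reader) {
        return snapshot.loadFrom(reader.getPath() + "/sessions");
    }

    // Hypothetical helper, included only to keep the sketch self-contained.
    static class SessionSnapshot {
        boolean saveTo(String path) { /* serialize sessions to the given path */ return true; }
        boolean loadFrom(String path) { /* restore sessions from the given path */ return true; }
    }
}
```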

2.3.3 Business Processing Synchronization Process

For the case where the client side is obtaining the latest metadata while a business thread is executing operations such as begin, commit, or branch register, Seata adopts the following handling:

  • On the client side:

    • If the client is executing operations like begin, commit, or branch register while it still needs to obtain the latest metadata, its RPC request might fail, since the leader it knows about may no longer exist or may no longer be the current leader.
    • If the request fails, the client receives an exception response, and in this case, the client needs to roll back based on the request result.
  • TC side for detecting the old leader:

    • On the TC side, if the client's request reaches the old leader node, TC checks if it is the current leader. If it is not the leader, it rejects the request.
    • If it is the leader but fails midway, for example while submitting the task to the state machine, the task creation (createTask) fails because the node is no longer in the leader state. In this case, the client also receives an exception response.
    • The old leader's task submission also fails, ensuring the consistency of transaction information.

Through the above handling, when the client obtains the latest metadata while a business operation is in progress, Seata ensures data consistency and transaction correctness. If the client's RPC request fails, it triggers a rollback operation. On the TC side, detection of the old leader and the failure of task submission prevent inconsistencies in transaction information. This way, the client's data can also maintain consistency.

3. Usage and Deployment

In terms of usage and deployment, the community adheres to the principles of minimal intrusion and minimal changes. Therefore, the overall deployment should be straightforward. The following sections introduce deployment changes separately for the client and server sides.

3.1 Client

Firstly, those familiar with registries and configuration centers should be aware of Seata's seata.registry.type configuration item, which supports options like Nacos, ZooKeeper, etcd, Redis, etc. Since version 2.0, a raft option has been added.

registry:
  type: raft
  raft:
    server-addr: 192.168.0.111:7091,192.168.0.112:7091,192.168.0.113:7091

Switch the registry.type to 'raft' and configure the address for obtaining Raft-related metadata, which is unified as the IP of the seata-server + HTTP port. Then, it is essential to configure the traditional transaction group.

seata:
  tx-service-group: default_tx_group
  service:
    vgroup-mapping:
      default_tx_group: default

If the current transaction group used is default_tx_group, then the corresponding Seata cluster/group is 'default'. There is a corresponding relationship, and this will be further explained in the server deployment section. With this, the changes on the client side are complete.

3.2 Server

For server-side changes, there might be more adjustments, involving familiarity with some tuning parameters and configurations. Of course, default values can be used without any modifications.

seata:
  server:
    raft:
      group: default # Group of this raft cluster; the value mapped from the client's transaction group must match it
      server-addr: 192.168.0.111:9091,192.168.0.112:9091,192.168.0.113:9091 # IP and port of the 3 nodes; the port is the node's netty port + 1000, default netty port is 8091
      snapshot-interval: 600 # Take a snapshot every 600 seconds so the raft log can be rolled quickly. With a lot of transaction data in memory, snapshotting every 600 seconds may cause response-time jitter, but it makes fault recovery and node restarts faster. Adjust it to 30 minutes, 1 hour, etc. according to the business, test for jitter yourself, and find a balance between RT jitter and recovery speed
      apply-batch: 32 # Commit the raft log at most once per 32 batched actions
      max-append-bufferSize: 262144 # Maximum size of the log storage buffer, default 256K
      max-replicator-inflight-msgs: 256 # Maximum number of in-flight requests when pipeline requests are enabled, default 256
      disruptor-buffer-size: 16384 # Internal disruptor buffer size; increase it appropriately for write-heavy scenarios, default 16384
      election-timeout-ms: 1000 # How long without a leader heartbeat before a new election starts
      reporter-enabled: false # Whether raft's own monitoring is enabled
      reporter-initial-delay: 60 # Monitoring reporting interval
      serialization: jackson # Serialization method, do not change
      compressor: none # Compression method for the raft log, e.g. gzip, zstd
      sync: true # Flushing method for the raft log, default is synchronous flushing
  config:
    # support: nacos, consul, apollo, zk, etcd3
    type: file # Any supported configuration center can be chosen here
  registry:
    # support: nacos, eureka, redis, zk, consul, etcd3, sofa
    type: file # In raft mode only the file registry is allowed (no third-party registry)
  store:
    # support: file, db, redis, raft
    mode: raft # Use the raft storage mode
    file:
      dir: sessionStore # Storage location of the raft log and related transaction logs; default is a relative path, better to set a fixed location

After configuring the above parameters on 3 or more seata-server nodes, start them directly. Log output similar to the following indicates that the cluster has started successfully:

2023-10-13 17:20:06.392  WARN --- [Rpc-netty-server-worker-10-thread-1] [com.alipay.sofa.jraft.rpc.impl.BoltRaftRpcFactory] [ensurePipeline] []: JRaft SET bolt.rpc.dispatch-msg-list-in-default-executor to be false for replicator pipeline optimistic.
2023-10-13 17:20:06.439 INFO --- [default/PeerPair[10.58.16.231:9091 -> 10.58.12.217:9091]-AppendEntriesThread0] [com.alipay.sofa.jraft.storage.impl.LocalRaftMetaStorage] [save] []: Save raft meta, path=sessionStore/raft/9091/default/raft_meta, term=4, votedFor=0.0.0.0:0, cost time=25 ms
2023-10-13 17:20:06.441 WARN --- [default/PeerPair[10.58.16.231:9091 -> 10.58.12.217:9091]-AppendEntriesThread0] [com.alipay.sofa.jraft.core.NodeImpl] [handleAppendEntriesRequest] []: Node <default/10.58.16.231:9091> reject term_unmatched AppendEntriesRequest from 10.58.12.217:9091, term=4, prevLogIndex=4, prevLogTerm=4, localPrevLogTerm=0, lastLogIndex=0, entriesSize=0.
2023-10-13 17:20:06.442 INFO --- [JRaft-FSMCaller-Disruptor-0] [io.seata.server.cluster.raft.RaftStateMachine] [onStartFollowing] []: groupId: default, onStartFollowing: LeaderChangeContext [leaderId=10.58.12.217:9091, term=4, status=Status[ENEWLEADER<10011>: Raft node receives message from new leader with higher term.]].
2023-10-13 17:20:06.449 WARN --- [default/PeerPair[10.58.16.231:9091 -> 10.58.12.217:9091]-AppendEntriesThread0] [com.alipay.sofa.jraft.core.NodeImpl] [handleAppendEntriesRequest] []: Node <default/10.58.16.231:9091> reject term_unmatched AppendEntriesRequest from 10.58.12.217:9091, term=4, prevLogIndex=4, prevLogTerm=4, localPrevLogTerm=0, lastLogIndex=0, entriesSize=0.
2023-10-13 17:20:06.459 INFO --- [Bolt-default-executor-4-thread-1] [com.alipay.sofa.jraft.core.NodeImpl] [handleInstallSnapshot] []: Node <default/10.58.16.231:9091> received InstallSnapshotRequest from 10.58.12.217:9091, lastIncludedLogIndex=4, lastIncludedLogTerm=4, lastLogId=LogId [index=0, term=0].
2023-10-13 17:20:06.489 INFO --- [Bolt-conn-event-executor-13-thread-1] [com.alipay.sofa.jraft.rpc.impl.core.ClientServiceConnectionEventProcessor] [onEvent] []: Peer 10.58.12.217:9091 is connected
2023-10-13 17:20:06.519 INFO --- [JRaft-Group-Default-Executor-0] [com.alipay.sofa.jraft.util.Recyclers] [<clinit>] []: -Djraft.recyclers.maxCapacityPerThread: 4096.
2023-10-13 17:20:06.574 INFO --- [JRaft-Group-Default-Executor-0] [com.alipay.sofa.jraft.storage.snapshot.local.LocalSnapshotStorage] [destroySnapshot] []: Deleting snapshot sessionStore/raft/9091/default/snapshot/snapshot_4.
2023-10-13 17:20:06.574 INFO --- [JRaft-Group-Default-Executor-0] [com.alipay.sofa.jraft.storage.snapshot.local.LocalSnapshotStorage] [close] []: Renaming sessionStore/raft/9091/default/snapshot/temp to sessionStore/raft/9091/default/snapshot/snapshot_4.
2023-10-13 17:20:06.689 INFO --- [JRaft-FSMCaller-Disruptor-0] [io.seata.server.cluster.raft.snapshot.session.SessionSnapshotFile] [load] []: on snapshot load start index: 4
2023-10-13 17:20:06.694 INFO --- [JRaft-FSMCaller-Disruptor-0] [io.seata.server.cluster.raft.snapshot.session.SessionSnapshotFile] [load] []: on snapshot load end index: 4
2023-10-13 17:20:06.694 INFO --- [JRaft-FSMCaller-Disruptor-0] [io.seata.server.cluster.raft.RaftStateMachine] [onSnapshotLoad] []: groupId: default, onSnapshotLoad cost: 110 ms.
2023-10-13 17:20:06.694 INFO --- [JRaft-FSMCaller-Disruptor-0] [io.seata.server.cluster.raft.RaftStateMachine] [onConfigurationCommitted] []: groupId: default, onConfigurationCommitted: 10.58.12.165:9091,10.58.12.217:9091,10.58.16.231:9091.
2023-10-13 17:20:06.705 INFO --- [JRaft-FSMCaller-Disruptor-0] [com.alipay.sofa.jraft.storage.snapshot.SnapshotExecutorImpl] [onSnapshotLoadDone] []: Node <default/10.58.16.231:9091> onSnapshotLoadDone, last_included_index: 4
last_included_term: 4
peers: "10.58.12.165:9091"
peers: "10.58.12.217:9091"
peers: "10.58.16.231:9091"

2023-10-13 17:20:06.722 INFO --- [JRaft-Group-Default-Executor-1] [com.alipay.sofa.jraft.storage.impl.RocksDBLogStorage] [lambda$truncatePrefixInBackground$2] []: Truncated prefix logs in data path: sessionStore/raft/9091/default/log from log index 1 to 5, cost 0 ms.

3.3 FAQ

  • Once seata.raft.server-addr is configured, scaling the cluster up or down must be done through the server's OpenAPI; directly changing this configuration and restarting will not take effect. The API for this operation is /metadata/v1/changeCluster?raftClusterStr=new_cluster_list.

  • If the addresses in server-addr are all on the local machine, you need to give the different local servers netty ports offset by 1000. For example, if server.port is 7092, the netty port will be 8092 and the raft election/communication port will be 9092, so you need to add the startup parameter -Dserver.raftPort=9092. On Linux, this can be specified with export JAVA_OPT="-Dserver.raftPort=9092".

4. Performance Test Comparison

Performance testing is divided into two scenarios. To avoid data hotspots, the client side initializes 3 million items; it uses JDK 21 virtual threads + Spring Boot 3 + Seata AT for testing, with the generational ZGC garbage collector. The testing tool is Alibaba Cloud PTS. The server side uniformly uses JDK 21 (not yet adapted for virtual threads). Configurations are as follows:

  • TC: 4c8g * 3

  • Client: 4c * 8G * 1

  • Database: Alibaba Cloud RDS 4c16g

  • The 64-concurrency test only stresses an @GlobalTransactional-annotated interface that commits empty transactions (no business logic); a minimal sketch of such an interface follows this list.

  • The 32-concurrency scenario performs inventory deduction against the 3 million random data items for 10 minutes.
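
For reference, a minimal sketch of what the "empty commit" interface in the 64-concurrency test might look like is shown below. The io.seata.spring.annotation.GlobalTransactional annotation and the Spring annotations are real APIs; the class name, endpoint, and transaction name are hypothetical.

```java
import io.seata.spring.annotation.GlobalTransactional;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical pressure-test endpoint: the global transaction is opened and committed
// with no branch or business logic, so the measured cost is almost entirely TC overhead
// (begin/commit round trips plus transaction record storage).
@RestController
public class EmptyCommitController {

    @GlobalTransactional(name = "empty-commit-benchmark", timeoutMills = 60000)
    @GetMapping("/benchmark/empty")
    public String emptyCommit() {
        // No RM branches are registered here; only the TC begin/commit path is exercised.
        return "ok";
    }
}
```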

4.1 Seata 1.7.1 db mode

(Figure: pressure test model)

Empty submission 64C


Random inventory deduction 32C


4.2 Seata 2.0 raft mode

(Figure: pressure test model)

Empty submission 64C


Random inventory deduction 32C


4.3 Test Result Comparison

32 concurrent random inventory deduction scenario with 3 million items

| TPS (avg) | TPS (max) | Count | RT (avg) | Errors | Storage Type |
| --- | --- | --- | --- | --- | --- |
| 1709 (42%↑) | 2019 (21%↑) | 1228803 (42%↑) | 13.86ms (30%↓) | 0 | Raft |
| 1201 | 1668 | 864105 | 19.86ms | 0 | DB |

64 concurrent empty pressure on @GlobalTransactional interface (test peak limit is 8000)

| TPS (avg) | TPS (max) | Count | RT (avg) | Errors | Storage Type |
| --- | --- | --- | --- | --- | --- |
| 5704 (20%↑) | 8062 (30%↑) | 4101236 (20%↑) | 7.79ms (19%↓) | 0 | Raft |
| 4743 | 6172 | 3410240 | 9.65ms | 0 | DB |

In addition to the direct comparison above, observing the pressure-test curves shows that in raft mode TPS and RT are more stable, with less jitter and better performance and throughput.

5. Summary

In the future development of Seata, performance, entry threshold, and deployment and operation costs are directions that we need to pay attention to and continuously optimize. After the introduction of the raft mode, Seata has the following characteristics:

  1. In terms of storage, after the separation of storage and computation, Seata's upper limit for optimization has been raised, making it more self-controlled.
  2. Lower deployment costs: no additional registry or storage middleware is needed.
  3. Lower entry threshold: there is no need to learn extra components such as a registry; Seata Raft can be used directly.

In response to industry trends, some open-source projects such as ClickHouse and Kafka have started to abandon ZooKeeper and adopt self-developed solutions instead, such as ClickHouse Keeper and KRaft. These solutions store metadata and other information by themselves, reducing third-party dependencies and thus operational and learning costs. These mature features are worth learning from.

Of course, the current Raft-mode solution may not yet be mature enough to fully live up to the description above. But precisely because of this theoretical foundation, the community should keep working in this direction and gradually bring practice closer to the theory. Everyone interested in Seata is welcome to join the community and contribute to its development!

· 14 min read

This article mainly introduces the evolutionary journey of distributed transactions from internal development to commercialization and open source, as well as the current progress and future planning of the Seata community. Seata is an open-source distributed transaction solution designed to provide a comprehensive solution for distributed transactions under modern microservices architecture. Seata offers complete distributed transaction solutions, including the AT, TCC, Saga, and XA transaction modes, supporting various programming languages and data storage schemes. Seata also provides easy-to-use APIs, extensive documentation, and examples to help enterprises quickly develop and deploy with it. Seata's advantages lie in its high availability, high performance, and high scalability, and it does not require extra complex operations for horizontal scaling. Seata is currently used in thousands of customer business systems on Alibaba Cloud, and its reliability has been recognized and adopted by major industry players. As an open-source project, the Seata community keeps expanding, becoming an important platform for developers to exchange, share, and learn, and attracting more and more attention and support from enterprises.

Today, I will primarily share about Seata on the following three topics:

  • From TXC/GTS to Seata
  • Latest developments in the Seata community
  • Future planning for the Seata community

From TXC/GTS to Seata

The Origin of Distributed Transactions

Seata is internally codenamed TXC (Taobao Transaction Constructor) within Alibaba, a name with a strong organizational-structure flavor. TXC originated from Alibaba's Wushi (Five Color Stones) project, named after the stones used by the goddess Nüwa to mend the heavens in ancient mythology, symbolizing an important milestone in Alibaba's evolution from monolithic to distributed architecture. During this project, a batch of epoch-making Internet middleware was developed, including the well-known "Big Three":

  • HSF service invocation framework: solves service communication issues after the transition from monolithic applications to service-oriented architectures.
  • TDDL database sharding framework: addresses database storage capacity and connection count issues at scale.
  • MetaQ messaging framework: addresses asynchronous invocation issues.

The birth of the Big Three satisfied the basic requirements of microservices-based business development, but the data consistency issues that followed the move to microservices were not properly addressed and lacked a unified solution. The likelihood of data consistency issues in microservices is much higher than in monolithic applications, and the added complexity of going from in-process calls to network calls multiplies the exceptional scenarios. The increase in service hops also makes it impossible for upstream and downstream services to coordinate data rollback when a business processing exception occurs. TXC was born to address these data-consistency pain points at the application architecture layer. The core data consistency scenarios it aimed to cover included:

  • Consistency across services: coordinates rollback of upstream and downstream service nodes in the event of system exceptions such as call timeouts and business exceptions.
  • Data consistency in database sharding: ensures that a logical SQL operation at the business layer is transactionally consistent across different data shards.
  • Data consistency in message sending: addresses the inconsistency between data operations and successful message sending.

To cover these common scenarios, TXC was seamlessly integrated with the Big Three. When businesses use the Big Three for development, they are completely unaware of TXC working in the background; they do not have to design for data consistency and can leave it to the framework, allowing them to focus on their own business development and greatly improving development efficiency.

(Figure: GTS architecture)

TXC has been widely used within Alibaba Group for many years and has been baptized by the surging traffic of large-scale events like Singles' Day, significantly improving business development efficiency, ensuring data accuracy, and eliminating the financial and reputational damage caused by data inconsistencies. With the continuous evolution of the architecture, a standard three-node cluster can now handle peaks of nearly 100K TPS with millisecond-level transaction processing. In terms of availability and performance, it has reached a four-nines SLA, running failure-free throughout the year even unattended.

The Evolution of Distributed Transactions

The birth of new things is always accompanied by doubts: can middleware really be trusted to ensure data consistency? The initial TXC was little more than a vague idea, lacking both a theoretical model and engineering practice. After we ran MVP (Minimum Viable Product) tests and pushed for business adoption, we frequently hit faults, often got up in the middle of the night to handle issues, and wore wristbands to bed to cope with emergency responses. Those were the most painful years I went through technically after taking over the team.

(Figure: evolution of distributed transactions)

Subsequently, we held extensive discussions and systematic reviews. We first needed to define the consistency problem: were we to achieve majority-consensus consistency like Raft, solve database consistency like Google Spanner, or something else? Looking top-down at the layered structure from the application node, it mainly includes development frameworks, service invocation frameworks, data middleware, database drivers, and databases; we had to decide at which layer to solve the data consistency problem. We compared the consistency requirements, universality, implementation complexity, and business integration costs of solving the problem at each level. In the end we weighed the pros and cons, decided to keep the implementation complexity to ourselves, and initially adopted the AT mode as the consistency component. We needed to ensure strong consistency without being locked into a specific database implementation, keep the scenarios general, and keep business integration costs low enough to be easy to adopt. This is why TXC initially adopted the AT mode.

A distributed transaction is not just a framework; it is a system. We defined the consistency problem in theory and abstracted the concepts of modes, roles, actions, isolation, and so on. From an engineering-practice perspective, we defined the programming model, including low-intrusion annotations, simple method templates, and flexible APIs, and we defined basic and enhanced transaction capabilities (e.g., how to support large-scale promotions at low cost), as well as capabilities in operations, security, performance, observability, and high availability.

(Figure: transaction logical model)

What problems do distributed transactions solve? A classic and tangible example is the money transfer scenario. A transfer consists of subtracting a balance and adding a balance: how do we ensure the atomicity of the operation? Without any intervention, these two steps may run into various problems, such as account B being canceled or the service call timing out. Timeouts have always been a hard problem in distributed applications: we cannot know for sure whether service B executed, or in what order. From a data perspective, this means the money may never be added to account B. After the service-oriented transformation, each node holds only partial information, while the transaction itself requires global coordination of all nodes. This calls for a centralized role with a god's-eye view that can obtain all information: the TC (Transaction Coordinator), which globally coordinates the transaction state. The TM (Transaction Manager) is the role that drives the generation of transaction proposals.

However, even gods nod off, and their judgments are not always correct, so we also need an RM (Resource Manager) role to verify the authenticity of the transaction. This is TXC's most basic philosophical model. We have methodologically verified that its data consistency is very complete; of course, our cognition is bounded, and perhaps the future will prove we were the proverbial turkey engineers, but under current circumstances this model is already sufficient to solve most existing problems.

(Figure: distributed transaction performance)

After years of architectural evolution, looking at single-link transaction latency, TXC takes on average about 0.2 milliseconds at transaction start and about 0.4 milliseconds for branch registration, keeping the transaction's additional latency within the millisecond range; this is also the theoretical limit we have calculated. In terms of throughput, a single node reaches 30,000 TPS, and a standard cluster is close to 100,000 TPS.


Seata Open Source

Why go open source? This is a question many people have asked me. In 2017, we commercialized the product as GTS (Global Transaction Service) on Alibaba Cloud, in both public and private cloud forms. Inside the group things were developing smoothly, but we ran into various problems during commercialization, which can be summed up in two categories. First, developers lack grounding in distributed transaction theory: most do not even understand local transactions, let alone distributed ones. Second, there were product maturity issues: all kinds of strange scenario-specific problems kept appearing, support and delivery costs rose sharply, and R&D turned into after-sales customer service. We reflected on why we hit so many problems. The main reason is that Alibaba Group has a unified language stack and a unified technology stack internally, and our polishing of those specific scenarios was very mature; serving one company, Alibaba, is fundamentally different from serving thousands of enterprises on the cloud. This also made us realize that our product's scenario ecosystem was not well developed. On GitHub, more than 80% of open-source software is basic software, and basic software primarily solves the problem of scenario universality, so it cannot be locked in by a single enterprise, just as Linux has a large number of community distributions. Therefore, to make our product better, we chose to open source it and build it together with developers, popularizing it among more enterprise users.

(Figure: Alibaba open source)

Alibaba's open-source journey has gone through three main stages. The first stage is the one Dubbo belongs to, where developers contribute out of love. Dubbo has been open source for over 10 years, and time has fully proven that it is an excellent piece of open-source software; its microkernel, plugin-based extensibility design was an important reference for me when I first open-sourced Seata. When designing software, we need to weigh extensibility against performance: are we making a three-year design, a five-year design, or a ten-year design that keeps up with business development? While solving the 0-to-1 service invocation problem, can we foresee the governance problems that come after scaling from 1 to 100? The second stage is the closed loop of open source and commercialization, where commercialization feeds back into the open-source community and promotes its development. I think cloud vendors are more likely to do open source well, for the following reasons:

  • First, the cloud is a scaled economy, which must be established on a stable and mature kernel foundation, packaging its product capabilities including high availability, maintenance-free, and elasticity on top of it. An unstable kernel will inevitably lead to excessive delivery and support costs, and high penetration of the R&D team's support Q&A will prevent large-scale replication, and high penetration rates will prevent rapid evolution and iteration of products.
  • Second, commercial products understand business needs better. Our internal technical teams often imagine requirements from a development perspective, and what they build ends up unused, so no value conversion takes place. The business requirements collected through commercialization are all real, so the open-source kernel must also evolve in that direction; failing to do so will inevitably split the architecture on the two sides and increase the team's maintenance costs.
  • Finally, the closed loop of open source and commercialization promotes better development of both. If the open-source kernel often has all kinds of problems, would you believe its commercial product is good enough?

The third stage is systematization and standardization. First, systematization is the basis of open-source solutions. Alibaba's open-source projects are mostly born out of internal e-commerce scenario practice: for example, Higress is used to connect Ant Group's gateways; Nacos carries services with millions of instances and tens of millions of connections; Sentinel provides degradation and throttling for high availability during major promotions; and Seata ensures transaction data consistency. This systematized set of open-source solutions is designed around the best practices of Alibaba's e-commerce ecosystem. Second, standardization is another important feature. Taking OpenSergo as an example, it is both a standard and an implementation. In the past few years, the number of domestic open-source projects has exploded, but the capabilities of various open-source products vary greatly, and many compatibility issues arise when integrating them with each other. Open-source projects like OpenSergo can therefore define standardized capabilities and interfaces and provide implementations, which greatly helps the development of the entire open-source ecosystem.

Latest Developments in the Seata Community

Introduction to the Seata Community

At present, Seata has open-sourced 4 transaction modes (AT, TCC, Saga, and XA) and is actively exploring other viable transaction solutions. Seata has integrated with more than 10 mainstream RPC frameworks and relational databases and has mutual integrations with more than 20 communities. In addition, we are exploring languages other than Java in the multi-language ecosystem, such as Golang, PHP, Python, and JS. Seata has been applied to the business systems of thousands of customers. Seata's applications have become more mature: successful cooperation with the community in the financial business scenarios of CITIC Bank and Everbright Bank led to adoption in core accounting systems. Landing a microservices system in financial scenarios is extremely demanding, which marks a new level of maturity for Seata's kernel.


Seata Ecosystem Expansion

Seata adopts a microkernel and plugin architecture design, exposing rich extension points for APIs, registry and configuration centers, storage modes, lock control, SQL parsers, load balancing, transport, protocol encoding and decoding, observability, and more. This allows businesses to easily make flexible extensions and choose technical components.


Seata Application Cases

Case 1: China Aviation Information's Air Travel project. The project introduced Seata in version 0.2 to solve the data consistency problems of its ticket and coupon business, greatly improving development efficiency, reducing asset losses caused by data inconsistency, and enhancing the user experience.

Case 2: Didi Chuxing's two-wheeler business unit. The unit introduced Seata in version 0.6.1, solving the data consistency problems of business processes such as blue bicycles, electric vehicles, and assets, optimizing the user experience and reducing asset loss.

Case 3: Meituan's infrastructure. Meituan's infrastructure team developed the internal distributed transaction solution Swan based on the open-source Seata project, which is used to solve distributed transaction problems across Meituan's businesses.

Case 4: Hema Town. Hema Town uses Seata to control the flower-stealing process in game interactions, shortening the development cycle from 20 days to 5 days and effectively reducing development costs.


Evolution of Seata Transaction Modes

(Figure: evolution of Seata transaction modes)


Current Progress of Seata

  • Support for Oracle and PostgreSQL multi-primary keys.
  • Support for Dubbo3.
  • Support for Spring Boot3.
  • Support for JDK 17.
  • Support for ARM64 images.
  • Support for multiple registration models.
  • Extended support for various SQL syntaxes.
  • Support for GraalVM Native Image.
  • Support for Redis lua storage mode.

Seata 2.x Development Planning

The planning mainly includes the following aspects:

  • Storage/Protocol/Features: explore storage-compute separation in the Raft cluster mode; unify the current 4 transaction mode APIs for a better experience; be compatible with the GTS protocol; support Saga annotations; support distributed lock control; support data-perspective insight and governance.
  • Ecosystem: support more databases and more service frameworks while exploring support for the domestic Xinchuang (IT application innovation) ecosystem; support the MQ ecosystem; further enhance APM support.
  • Solutions: in addition to supporting microservices ecosystems, explore multi-cloud and more cloud-native solutions; add security and traffic-protection capabilities; achieve self-convergence of core components in the architecture.
  • Multi-Language Ecosystem: Java is the most mature; continue to improve the other supported languages while exploring language-independent Transaction Mesh solutions.
  • R&D Efficiency/Experience: improve test coverage, prioritizing quality, compatibility, and stability; restructure the official website's documentation to improve the search hit rate; on the experience side, simplify operations and deployment to achieve one-click installation and metadata simplification; let the console support transaction control and online analysis capabilities.

In one sentence, the 2.x plan is summarized as: Bigger scenarios, bigger ecosystems, from usable to user-friendly.


Contact Information for the Seata Community


· 8 min read

Introduction to Seata

Seata's predecessor is the middleware used at massive scale within Alibaba Group to ensure the consistency of distributed transactions; Seata is its open-source product, maintained by the community. Before introducing Seata, let's first discuss some problematic scenarios that we often encounter in business development.

Business Scenarios

In the course of development, our business typically starts as a simple application and gradually evolves into a large-scale, complex one. These complex scenarios inevitably run into distributed transaction management problems; Seata emerged to solve exactly these problems. Let's introduce a few classic scenarios:

Scenario 1: distributed transactions in the scenario of split library and split table

Initially, our business was small and lightweight, and a single database was enough to secure our data links. However, as the business scale keeps growing and the business keeps getting more complex, a single database usually hits bottlenecks in capacity and performance. The usual solution is to evolve to a split-database, split-table architecture. At this point, distributed transactions in the split-database/split-table scenario are introduced.

Scenario 2: Distributed Transactions in a Cross-Service Scenario

A common way to reduce the complexity of a monolithic application is microservice splitting. After splitting, our product consists of multiple microservice components with different functions, each using independent database resources. When data consistency scenarios involve cross-service calls, distributed transactions in cross-service scenarios are introduced.

Seata architecture

Its core components are mainly as follows:

  • Transaction Coordinator (TC).

Transaction coordinator that maintains the running state of the global transaction and is responsible for coordinating and driving the commit or rollback of the global transaction.

  • Transaction Manager (TM)

Controls global transaction boundaries, responsible for opening a global transaction and ultimately initiating a global commit or global rollback resolution, TM defines the boundaries of a global transaction.

  • Resource Manager (RM)

Controls branch transactions and is responsible for branch registration, status reporting, and receiving commands from the Transaction Coordinator to drive the commit or rollback of branch (local) transactions. RM defines the boundaries and behavior of branch transactions.

Seata's observable practices

Why do we need observables?

  • Distributed transaction message links are more complex

While solving problems of ease of use and distributed transaction consistency for users, Seata requires multiple interactions among TC, TM, and RM; especially as microservice links become more complex, Seata's interaction links grow accordingly. In this case, we need observable capabilities to observe and analyze transaction links.

  • Abnormal links are hard to locate during troubleshooting, and there is no basis for performance optimization

When troubleshooting Seata's abnormal transaction links, the traditional approach is to look at logs, which are troublesome to retrieve. With observable capabilities, we can visually analyze links, quickly locate problems, and get a basis for optimizing time-consuming transaction links.

  • Visualisation, Data Quantification

The visualisation capability allows users to have an intuitive feeling of transaction execution; with quantifiable data, it helps users to assess resource consumption and plan budgets.

Overview of observable capabilities

| Observable dimension | Desired capabilities for Seata | Technology selection reference |
| --- | --- | --- |
| Metrics | Functionality: metrics can be grouped and isolated by business, capturing important metrics such as total number of transactions and elapsed time. Performance: high-volume performance with plug-ins loaded on demand. Architecture: reduce third-party dependencies; server and client can adopt a unified architecture to reduce technical complexity. Compatibility: at least compatible with the Prometheus ecosystem. | Prometheus: industry-leading in metrics storage and query. OpenTelemetry: the de facto standard for observable data collection and specification; it does not store, display, or analyze the data itself. |
| Tracing | Functionality: full-link tracing of the distributed transaction lifecycle, reflecting the performance consumption of distributed transaction execution. Ease of use: simple and easy to access for Seata users. | SkyWalking: uses Java agent probe technology; efficient, simple, and easy to use. |
| Logging | Functionality: logging all lifecycle information of the server and the client. Ease of use: can quickly match global transactions to the corresponding link logs based on XID. | Alibaba Cloud SLS logging service; ELK stack. |

Metrics dimension

Design Ideas

  1. As an integrated data-consistency framework, Seata's Metrics module uses as few third-party dependencies as possible to reduce the risk of conflicts.
  2. The Metrics module strives for higher metrics performance and lower resource overhead to minimize the side effects of turning it on.
  3. Whether Metrics is activated and how data is published depends on the configuration; if enabled, Metrics is activated automatically and metrics data is published via the Prometheus exporter by default.
  4. Do not rely on Spring; use SPI (Service Provider Interface) to load extensions (see the sketch after this list).
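
As a loose illustration of point 4, the sketch below uses the standard java.util.ServiceLoader to show the SPI pattern. Seata itself ships its own enhanced SPI loader rather than plain ServiceLoader, and the MetricsExporter interface here is a simplified placeholder, not Seata's actual API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.ServiceLoader;

// Simplified placeholder for an exporter abstraction like the one in seata-metrics-api.
interface MetricsExporter {
    void start();
}

public class ExporterLoader {

    // Discover every MetricsExporter implementation declared in
    // META-INF/services/MetricsExporter on the classpath; no Spring container involved.
    public static List<MetricsExporter> loadAll() {
        List<MetricsExporter> exporters = new ArrayList<>();
        for (MetricsExporter exporter : ServiceLoader.load(MetricsExporter.class)) {
            exporters.add(exporter);
        }
        return exporters;
    }

    public static void main(String[] args) {
        // Each discovered exporter (e.g. a Prometheus one) is started only if present on the classpath.
        loadAll().forEach(MetricsExporter::start);
    }
}
```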

module design


  • seata-metrics-core: Metrics core module, organises (loads) 1 Registry and N Exporters according to configuration;
  • seata-metrics-api: defines the Meter metrics interface, the Registry metrics registry interface;
  • seata-metrics-exporter-prometheus: built-in implementation of prometheus-exporter;
  • seata-metrics-registry-compact: built-in Registry implementation and lightweight implementations of the Gauge, Counter, Summary, and Timer metrics;

metrics module workflow

(Figure: metrics module workflow)

The above figure shows the workflow of the metrics module, which works as follows:

  1. Load the Exporter and Registry implementation classes based on configuration using the SPI mechanism;
  2. based on the message subscription and notification mechanism, listen for state-change events of all global transactions and publish them to the EventBus (see the sketch after this list);
  3. event subscribers consume the events and write the generated metrics into the Registry;
  4. the monitoring system (e.g., Prometheus) pulls data from the Exporter.
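
Steps 2 and 3 follow a plain publish/subscribe pattern. The sketch below illustrates it with Guava's EventBus; the event and subscriber types are hypothetical stand-ins rather than Seata's actual classes.

```java
import com.google.common.eventbus.EventBus;
import com.google.common.eventbus.Subscribe;

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Hypothetical event describing a global transaction state change (begin/commit/rollback).
record GlobalTransactionEventStub(String status) {}

// Subscriber that turns events into counters, standing in for the metrics registry of step 3.
class MetricsSubscriber {
    final ConcurrentHashMap<String, LongAdder> counters = new ConcurrentHashMap<>();

    @Subscribe
    public void onEvent(GlobalTransactionEventStub event) {
        counters.computeIfAbsent("seata.transaction." + event.status(), k -> new LongAdder())
                .increment();
    }
}

public class MetricsWorkflowSketch {
    public static void main(String[] args) {
        EventBus eventBus = new EventBus();          // step 2: subscription/notification bus
        eventBus.register(new MetricsSubscriber());  // step 3: subscriber writes metrics
        eventBus.post(new GlobalTransactionEventStub("committed"));
        // Step 4 (not shown): an exporter exposes the counters for Prometheus to pull.
    }
}
```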

TC Core Metrics


TM Core Metrics


RM Core Metrics


Dashboard Showcase


Tracing dimensions

Why does Seata need tracing?

  1. For the business side, how much of a drag on business performance will introducing Seata cause? Where is the main time spent? How should the business logic be optimized? These are all unknown.
  2. All of Seata's message records are persisted to disk via logs, but logs are very unfriendly to users who do not understand Seata. Can we improve the efficiency of troubleshooting transaction links by integrating Tracing?
  3. For novice users, can Tracing records help them quickly understand how Seata works and lower the barrier to using Seata?

Seata's tracing solution

  • Seata defines Header information in a custom RPC message protocol;
  • SkyWalking intercepts the specified RPC message and injects tracing related span information;
  • The lifecycle scope of the span is bounded by the sending and receiving of the RPC message.

Based on the above approach, Seata implements full-link tracing of transactions; for specific integration steps, please refer to the documentation on accessing SkyWalking for Seata applications and Seata-server.

tracing effects

  • Based on the demo scenario:
  1. The user requests the transaction service
  2. The transaction service locks inventory
  3. The transaction service creates a bill
  4. The billing service performs a debit


  • Transaction link for GlobalCommit success (example)


Logging dimension

Design Ideas

Logging sits at the bottom of the observable dimensions. What actually sits at the bottom is the design of our log format: only with a good log format can logs be collected, stored in a modular way, and displayed well. On top of that come log collection, storage, monitoring, alerting, and data visualization, for which ready-made tools already exist, such as Alibaba's SLS logging service and the ELK technology stack; here we mainly weigh overhead cost, access complexity, ecosystem maturity, and so on.

Log format design

Here we take a Seata-Server log format as a case study:

  • Canonical thread pool naming: when there are many thread pools and threads, canonical thread naming makes the execution order clear even though threads run asynchronously.
  • Traceable method and class names: quickly locate specific code blocks.
  • Transparent key runtime information: highlight key logs and avoid printing non-critical logs to reduce log redundancy.
  • Extensible message format: reduce code changes by extending the output format of the message classes.

Summary & Outlook

Metrics

Summary: we have basically achieved quantifiable and observable distributed transactions. Outlook: more granular metrics and broader ecosystem compatibility.

Tracing

Summary: full-link traceability of distributed transactions. Outlook: trace transaction links by XID and quickly locate the root cause of abnormal links.

Logging

Summary: structured log format. Outlook: evolution of the log observability system.

· 2 min read

Production-ready seata-go 1.2.0 is here!

Seata is an open source distributed transaction solution that provides high-performance and easy-to-use distributed transaction services.

Release overview

Seata-go version 1.2.0 supports the XA mode, a distributed transaction processing specification proposed by the X/Open organisation, whose advantage is being non-intrusive to business code. Currently, Seata-go's XA mode supports MySQL databases. With this release, seata-go has gathered three transaction modes: AT, TCC and XA. Main features of XA mode:

feature

  • [#467] implements XA mode support for MySQL.
  • [#534] Load balancing for session support.

bugfix

  • [#540] Fix a bug in initialising xa mode.
  • [#545] Fix a bug in getting db version number in xa mode.
  • [#548] Fix bug where starting xa fails.
  • [#556] Fix bug in xa datasource.
  • [#562] Fix bug with committing xa global transaction
  • [#564] Fix bug committing xa branching transactions
  • [#566] Fix bug with local transactions using xa data source.

optimise

  • [#523] Optimise CI process
  • [#525] rename jackson serialisation to json
  • [#532] Remove duplicate code
  • [#536] optimise go import code formatting
  • [#554] optimise xa mode performance
  • [#561] Optimise xa mode logging output

test

  • [#535] Add integration tests.

doc

  • [#550] Added changelog for version 1.2.0.

contributors

Thanks to these contributors for their code commits. If there is any unintended omission, please report it.

· 4 min read

Welcome everyone to register for Seata Open Source Summer 2023 topics

The registration period for Open Source Summer 2023 runs from April 29th to June 4th, and we welcome registration for Seata 2023 topics! Here, you will have the opportunity to delve into the theory and application of distributed transactions, and collaborate with students from different backgrounds to complete practical projects. We look forward to your active participation and contribution, jointly promoting the development of the distributed transaction field.


Seata Open Source Summer 2023

Open Source Summer is a summer open source activity initiated and long-term supported by the Institute of Software, Chinese Academy of Sciences, as part of the "Open Source Software Supply Chain Lighting Program", aimed at encouraging students to actively participate in the development and maintenance of open source software, cultivating and discovering more outstanding developers, promoting the vigorous development of excellent open source software communities, and assisting in the construction of the open source software supply chain.

Participating students will collaborate remotely online with senior mentors to participate in the development of various organizational projects in the open source community and receive bonuses, gifts, and certificates. These gains are not only a highlight on future graduation resumes but also a brilliant starting point towards becoming top developers. Each project is divided into two difficulty levels: basic and advanced, with corresponding project completion bonuses of RMB 8,000 and RMB 12,000 before tax, respectively.

Introduction to the Seata Community

Seata is an open-source distributed transaction solution, with over 23K stars on GitHub, dedicated to providing high-performance and easy-to-use distributed transaction services under the microservices architecture. Before being open-sourced, Seata played a role as a middleware for distributed data consistency within Alibaba, where almost every transaction needed to use Seata. After undergoing the baptism of Double 11's massive traffic, it provided strong technical support for business.

Summary of Seata Community Open Source Summer 2023 Project Topics

The Seata community recommends 6 selected project topics for the Open Source Summer 2023 organizing committee. You can visit the following link to make your selection:
https://summer-ospp.ac.cn/org/orgdetail/064c15df-705c-483a-8fc8-02831370db14?lang=zh
Please communicate promptly with the respective mentors and prepare project application materials, and register through the official channels (the following topics are not listed in any particular order):


Project One: Implementation of NamingServer for Service Discovery and Registration

Difficulty: Advanced

Project Community Mentor: Chen Jianbin

Mentor's Contact Email: 364176773@qq.com

Project Description:
Currently, Seata's service exposure and discovery mainly rely on third-party registries, which adds extra learning and usage costs as the project evolves. Most mainstream middleware with a server component has started to build a self-contained service discovery and control loop, providing the server with components or functions of higher compatibility and reliability, such as Kafka's KRaft, RocketMQ's NameServer, and ClickHouse's ClickHouse Keeper. To address these problems and the needs of architectural evolution, Seata needs to build its own NamingServer to guarantee greater stability and reliability.

Project Link: https://summer-ospp.ac.cn/org/prodetail/230640380?list=org&navpage=org

...

(Projects Two to Six translated in the same manner)

...

How to Participate in Seata Open Source Summer 2023 and Quickly Select a Project?

Welcome to communicate with each mentor through the above contact information and prepare project application materials.

During the project participation period, students can work online from anywhere in the world. Completed Seata-related projects must be submitted to the Seata community repository as a PR by September 30th, so please plan ahead.

seata2023-3

For information about mentors and other project matters, scan the QR code below to join the DingTalk group, where you can learn about the various projects in the Seata community, meet the community's open-source mentors, and get help with your subsequent application.

summer2023-4

Reference Materials:

Seata Website: https://seata.apache.org/

Seata GitHub: https://github.com/seata

Open Source Summer Official Website: https://summer-ospp.ac.cn/org/orgdetail/ab188e59-fab8-468f-bc89-bdc2bd8b5e64?lang=zh

If students are interested in other areas of microservices, they can also try applying for the following projects:

  • Students interested in microservice configuration and registry centers can apply for Nacos Open Source Summer;
  • Students interested in microservice frameworks and RPC frameworks can apply for Spring Cloud Alibaba Open Source Summer and Dubbo Open Source Summer;
  • Students interested in cloud-native gateways can apply for Higress Open Source Summer;
  • Students interested in distributed high-availability protection can apply for [Sentinel Open Source Summer](https://summer-ospp.ac.cn/org/orgdetail/5e879522-bd90-4a8b-bf8b-b11aea48626b?lang=zh);
  • Students interested in microservices governance can apply for [OpenSergo Open Source Summer](https://summer-ospp.ac.cn/org/orgdetail/aaff4eec-11b1-4375-997d-5eea8f51762b?lang=zh).

· 6 min read

Seata 1.6.0 Released with Significant Performance Improvement

Seata is an open-source distributed transaction solution that provides high performance and easy-to-use distributed transaction services.

Download Links for seata-server:

source | binary

Updates in this version:

feature:

  • [#4863] Support for multiple primary keys in Oracle and PostgreSQL
  • [#4649] Support for multiple registry centers in seata-server
  • [#4779] Support for Apache Dubbo3
  • [#4479] TCC annotations can now be added to interfaces and implementation classes
  • [#4877] Client SDK supports JDK17
  • [#4914] Support for update join syntax for MySQL
  • [#4542] Support for Oracle timestamp type
  • [#5111] Support for Nacos contextPath configuration
  • [#4802] Dockerfile supports arm64

Bug Fixes:

  • [#4780] Fixed the issue where TimeoutRollbacked event wasn't sent after a successful timeout rollback.
  • [#4954] Fixed NullPointerException when the output expression was incorrect.
  • [#4817] Fixed the problem with non-standard configuration in higher versions of Spring Boot.
  • [#4838] Fixed the issue where undo log wasn't generated when using Statement.executeBatch().
  • [#4533] Fixed inaccurate metric data caused by duplicate events handling for handleRetryRollbacking.
  • [#4912] Fixed the issue where mysql InsertOnDuplicateUpdate couldn't correctly match column names due to inconsistent case.
  • [#4543] Fixed support for Oracle nclob data type.
  • [#4915] Fixed the problem of not obtaining ServerRecoveryProperties attributes.
  • [#4919] Fixed the issue where XID's port and address appeared as null:0.
  • [#4928] Fixed NPE issue in rpcContext.getClientRMHolderMap.
  • [#4953] Fixed the issue where InsertOnDuplicateUpdate could bypass primary key modification.
  • [#4978] Fixed kryo support for cyclic dependencies.
  • [#4985] Fixed the issue of duplicate undo_log id.
  • [#4874] Fixed startup failure with OpenJDK 11.
  • [#5018] Fixed server startup failure issue due to loader path using relative path in startup script.
  • [#5004] Fixed the issue of duplicate row data in mysql update join.
  • [#5032] Fixed the abnormal SQL statement in mysql InsertOnDuplicateUpdate due to incorrect calculation of condition parameter fill position.
  • [#5033] Fixed NullPointerException issue in SQL statement of InsertOnDuplicateUpdate due to missing insert column field.
  • [#5038] Fixed SagaAsyncThreadPoolProperties conflict issue.
  • [#5050] Fixed the issue where global status under Saga mode wasn't correctly changed to Committed.
  • [#5052] Fixed placeholder parameter issue in update join condition.
  • [#5031] Fixed the issue of using null value index as query condition in InsertOnDuplicateUpdate.
  • [#5075] Fixed the inability to intercept SQL statements with no primary key and unique index in InsertOnDuplicateUpdate.
  • [#5093] Fixed accessKey loss issue after seata server restart.
  • [#5092] Fixed the issue of incorrect AutoConfiguration order when seata and jpa are used together.
  • [#5109] Fixed NPE issue when @GlobalTransactional is not applied on RM side.
  • [#5098] Disabled oracle implicit cache for Druid.
  • [#4860] Fixed metrics tag override issue.
  • [#5028] Fixed null value issue in insert on duplicate SQL.
  • [#5078] Fixed interception issue for SQL statements without primary keys and unique keys.
  • [#5097] Fixed accessKey loss issue when Server restarts.
  • [#5131] Fixed issue where XAConn cannot rollback when in active state.
  • [#5134] Fixed issue where hikariDataSource auto proxy fails in some cases.
  • [#5163] Fixed compilation failure in higher versions of JDK.

Optimization:

  • [#4681] Optimized the process of competing locks.
  • [#4774] Optimized mysql8 dependency in seataio/seata-server image.
  • [#4750] Optimized the release of global locks in AT branch to not use xid.
  • [#4790] Added automatic OSSRH github action publishing.
  • [#4765] In XA mode, MySQL 8.0.29 and above no longer hold the connection until the second phase.
  • [#4797] Optimized all github actions scripts.
  • [#4800] Added NOTICE file.
  • [#4761] Used hget instead of hmget in RedisLocker.
  • [#4414] Removed log4j dependency.
  • [#4836] Improved readability of BaseTransactionalExecutor#buildLockKey(TableRecords rowsIncludingPK) method.
  • [#4865] Fixed security vulnerabilities in Saga visualization designer GGEditor.
  • [#4590] Dynamic configuration support for automatic degradation switch.
  • [#4490] Optimized tccfence record table to delete by index.
  • [#4911] Added header and license checks.
  • [#4917] Upgraded package-lock.json to fix vulnerabilities.
  • [#4924] Optimized pom dependencies.
  • [#4932] Extracted default values for some configurations.
  • [#4925] Optimized javadoc comments.
  • [#4921] Fixed security vulnerabilities in console module and upgraded skywalking-eyes version.
  • [#4936] Optimized storage configuration reading.
  • [#4946] Passed SQL exceptions encountered when acquiring locks to the client.
  • [#4962] Optimized build configuration and corrected base image of docker image.
  • [#4974] Removed limitation on querying globalStatus quantity under redis mode.
  • [#4981] Improved error message when tcc fence record cannot be found.
  • [#4995] Fixed duplicate primary key query conditions in the SQL statement after mysql InsertOnDuplicateUpdate.
  • [#5047] Removed unused code.
  • [#5051] Throw BranchRollbackFailed_Unretriable when a dirty write on the undo log is detected during rollback.
  • [#5075] Intercept insert on duplicate update statements without primary keys and unique indexes.
  • [#5104] ConnectionProxy is no longer dependent on druid.
  • [#5124] Support deleting TCC fence record table for oracle.
  • [#4468] Support kryo 5.3.0.
  • [#4807] Optimized image and OSS repository publishing pipelines.
  • [#4445] Optimized transaction timeout judgment.
  • [#4958] Optimized execution of triggerAfterCommit() for timeout transactions.
  • [#4582] Optimized transaction sorting in redis storage mode.
  • [#4963] Added ARM64 pipeline CI testing.
  • [#4434] Removed seata-server CMS GC parameters.

Testing:

  • [#4411] Tested Oracle database AT mode type support.
  • [#4794] Refactored code and attempted to fix unit test DataSourceProxyTest.getResourceIdTest().
  • [#5101] Fixed ClassNotFoundException issue in zk registration and configuration center DataSourceProxyTest.getResourceIdTest().

Special thanks to the following contributors for their code contributions. If there are any unintentional omissions, please report them.

At the same time, we have received many valuable issues and suggestions from the community, and we are very grateful to everyone.

· 3 min read

Seata 1.5.2 Released with XID Load Balancing Support

Seata is an open-source distributed transaction solution that provides high-performance and easy-to-use distributed transaction services.

seata-server Download Links:

source | binary

The key updates in this version include:

Features:

  • [#4661] Added support for XID load balancing algorithm.
  • [#4676] Added support for Seata server to expose services through SLB when using Nacos as the registry center.
  • [#4642] Added support for parallel processing of client batch requests.
  • [#4567] Added support for the find_in_set function in the WHERE condition.

Bug Fixes:

  • [#4515] Fixed an issue where SeataTCCFenceAutoConfiguration on the develop branch throws a ClassNotFoundException when the client does not use a DB.
  • [#4661] Fixed SQL exceptions when using PostgreSQL in the console.
  • [#4667] Fixed an exception when updating the map in RedisTransactionStoreManager on the develop branch.
  • [#4678] Fixed the issue of cache penetration when the property transport.enableRmClientBatchSendRequest is not configured.
  • [#4701] Fixed the issue of missing command line parameters.
  • [#4607] Fixed a defect in skipping global lock verification.
  • [#4696] Fixed the insertion issue when using the Oracle storage mode.
  • [#4726] Fixed a possible NPE issue when sending messages in batches.
  • [#4729] Fixed the issue of incorrect setting of AspectTransactional.rollbackForClassName.
  • [#4653] Fixed the exception of non-numeric primary key in INSERT_ON_DUPLICATE.

Optimizations:

  • [#4650] Fixed a security vulnerability.
  • [#4670] Optimized the number of threads in the branchResultMessageExecutor thread pool.
  • [#4662] Optimized the monitoring metrics for rolling back transactions.
  • [#4693] Optimized the console navigation bar.
  • [#4700] Fixed failures in the execution of maven-compiler-plugin and maven-resources-plugin.
  • [#4711] Separated the lib dependency during deployment.
  • [#4720] Optimized pom descriptions.
  • [#4728] Upgraded the logback version dependency to 1.2.9.
  • [#4745] Added support for mysql8 driver in the distribution package.
  • [#4626] Used easyj-maven-plugin plugin instead of flatten-maven-plugin to fix compatibility issues between shade plugin and flatten plugin.
  • [#4629] Checked the constraint relationship before and after updating the globalSession status.
  • [#4662] Optimized the readability of EnhancedServiceLoader.

Tests:

  • [#4544] Optimized the jackson package dependency issue in TransactionContextFilterTest.
  • [#4731] Fixed unit test issues in AsyncWorkerTest and LockManagerTest.

A big thanks to the contributors for their valuable code contributions. If anyone has been inadvertently omitted, please report it.

At the same time, we have received many valuable issues and suggestions from the community, thank you very much.

· 11 min read

Today, let's talk about how the new version (1.5.1) of Alibaba's Seata resolves the issues of idempotence, hanging (suspension), and empty rollback in TCC mode.

1 TCC

TCC mode is the most classic solution for distributed transactions. It divides a distributed transaction into two phases. In the try phase, each branch transaction reserves resources. If all branch transactions reserve their resources successfully, the global transaction moves to the commit phase and commits globally; if any node fails to reserve resources, the global transaction moves to the cancel phase and rolls back globally.

Take the traditional order, inventory, and account services as an example. In the try phase, resources are reserved by inserting the order, deducting the inventory, and deducting the amount; all three services commit their local transactions, and the reserved resources can be moved into an intermediate table. In the commit phase, the resources reserved in the try phase are moved into the final table. In the cancel phase, the reserved resources are released, for example by returning the deducted amount to the customer's account.

Note: the try phase must commit its local transaction. For example, when deducting the order amount, the money must actually be deducted from the customer's account in the try phase; if it is not, the commit phase may fail because the customer's account no longer has enough money.

1.1 try-commit

In the try phase, resources are first reserved, and then they are deducted in the commit phase. The diagram below illustrates this process:

fence-try-commit

1.2 try-cancel

In the try phase, resources are first reserved. If the deduction of inventory fails, leading to a rollback of the global transaction, the resources are released in the cancel phase. The diagram below illustrates this process:

fence-try-cancel

2 TCC Advantages

The biggest advantage of TCC mode is its high efficiency. In the try phase, the resource "locking" in TCC mode is not a real lock but an actual local transaction commit that reserves resources in an intermediate state, with no blocking or waiting. Therefore it is more efficient than the other modes.

Additionally, the TCC mode can be optimized as follows:

2.1 Asynchronous Commit

After the try phase succeeds, instead of immediately entering the confirm/cancel phase, it is considered that the global transaction has already ended. A scheduled task is started to asynchronously execute the confirm/cancel phase, which involves deducting or releasing resources. This approach can greatly improve performance.

2.2 Same-Database Mode

In the TCC mode, there are three roles:

  • TM: Manages the global transaction, including starting the global transaction and committing/rolling back the global transaction.
  • RM: Manages the branch transaction.
  • TC: Manages the state of the global transaction and branch transactions.

The diagram below is from the Seata official website:

fence-fiffrent-db

When TM starts a global transaction, RM needs to send a registration message to TC, and TC saves the state of the branch transaction. When TM requests a commit or rollback, TC needs to send commit or rollback messages to RM. In this way, in a distributed transaction with two branch transactions, there are four RPCs between TC and RM.

After optimization, the process is as shown in the diagram below:

TC saves the state of the global transaction. When TM starts a global transaction, RM no longer needs to send a registration message to TC. Instead, it saves the state of the branch transaction locally. After TM sends a commit or rollback message to TC, the asynchronous thread in RM first retrieves the uncommitted branch transactions saved locally, and then sends a message to TC to obtain the state of the global transaction in which the local branch transaction is located, in order to determine whether to commit or rollback the local transaction.

With this optimization, the number of RPCs is reduced by 50%, resulting in a significant performance improvement.

3 RM Code Example

Taking the inventory service as an example, the RM inventory service interface code is as follows:

@LocalTCC
public interface StorageService {

    /**
     * decrease (Try phase: reserve storage)
     *
     * @param xid       the global transaction id
     * @param productId the product id
     * @param count     the count to deduct
     * @return whether the try phase succeeded
     */
    @TwoPhaseBusinessAction(name = "storageApi", commitMethod = "commit", rollbackMethod = "rollback", useTCCFence = true)
    boolean decrease(String xid, Long productId, Integer count);

    /**
     * commit (Confirm phase)
     *
     * @param actionContext the business action context
     * @return whether the commit succeeded
     */
    boolean commit(BusinessActionContext actionContext);

    /**
     * rollback (Cancel phase)
     *
     * @param actionContext the business action context
     * @return whether the rollback succeeded
     */
    boolean rollback(BusinessActionContext actionContext);
}

With the @LocalTCC annotation, the TCC resource is registered with the TC when the RM is initialized. The try phase method (here the decrease method) is annotated with @TwoPhaseBusinessAction, which defines the TCC resource name (its resourceId), the commit method, the rollback method, and the useTCCFence property that will be explained in the next section.

4 Issues with TCC

There are three major issues with the TCC pattern: idempotence, suspension, and empty rollback. In version 1.5.1 of Seata, a transaction control table named tcc_fence_log is introduced to address these issues. The useTCCFence property mentioned in the previous @TwoPhaseBusinessAction annotation is used to enable or disable this mechanism, with a default value of false.

The creation SQL statement for the tcc_fence_log table (in MySQL syntax) is as follows:

CREATE TABLE IF NOT EXISTS `tcc_fence_log`
(
    `xid`          VARCHAR(128) NOT NULL COMMENT 'global id',
    `branch_id`    BIGINT       NOT NULL COMMENT 'branch id',
    `action_name`  VARCHAR(64)  NOT NULL COMMENT 'action name',
    `status`       TINYINT      NOT NULL COMMENT 'status(tried:1;committed:2;rollbacked:3;suspended:4)',
    `gmt_create`   DATETIME(3)  NOT NULL COMMENT 'create time',
    `gmt_modified` DATETIME(3)  NOT NULL COMMENT 'update time',
    PRIMARY KEY (`xid`, `branch_id`),
    KEY `idx_gmt_modified` (`gmt_modified`),
    KEY `idx_status` (`status`)
) ENGINE = InnoDB
  DEFAULT CHARSET = utf8mb4;

4.1 Idempotence

During the commit/cancel phase, if the TC does not receive a response from the branch transaction, it needs to retry the operation. Therefore, it is necessary for the branch transaction to support idempotence.

Let's take a look at how this is addressed in the new version. The following code is from the TCCResourceManager class:

@Override
public BranchStatus branchCommit(BranchType branchType, String xid, long branchId, String resourceId,
                                 String applicationData) throws TransactionException {
    TCCResource tccResource = (TCCResource) tccResourceCache.get(resourceId);
    Object targetTCCBean = tccResource.getTargetBean();
    Method commitMethod = tccResource.getCommitMethod();
    try {
        //BusinessActionContext
        BusinessActionContext businessActionContext = getBusinessActionContext(xid, branchId, resourceId,
            applicationData);
        Object[] args = this.getTwoPhaseCommitArgs(tccResource, businessActionContext);
        Object ret;
        boolean result;
        //whether the useTCCFence property is set to true
        if (Boolean.TRUE.equals(businessActionContext.getActionContext(Constants.USE_TCC_FENCE))) {
            try {
                result = TCCFenceHandler.commitFence(commitMethod, targetTCCBean, xid, branchId, args);
            } catch (SkipCallbackWrapperException | UndeclaredThrowableException e) {
                throw e.getCause();
            }
        } else {
            // normal commit: ret = commitMethod.invoke(targetTCCBean, args), then convert ret into result
            // (this branch is omitted in the article)
        }
        LOGGER.info("TCC resource commit result : {}, xid: {}, branchId: {}, resourceId: {}", result, xid, branchId, resourceId);
        return result ? BranchStatus.PhaseTwo_Committed : BranchStatus.PhaseTwo_CommitFailed_Retryable;
    } catch (Throwable t) {
        return BranchStatus.PhaseTwo_CommitFailed_Retryable;
    }
}

The above code shows that when executing the commit method of the branch transaction, it first checks if the useTCCFence property is true. If it is true, it follows the commitFence logic in the TCCFenceHandler class; otherwise, it follows the normal commit logic.

The commitFence call above delegates to the commitFence method of the TCCFenceHandler class. The code is as follows:

public static boolean commitFence(Method commitMethod, Object targetTCCBean,
                                  String xid, Long branchId, Object[] args) {
    return transactionTemplate.execute(status -> {
        try {
            Connection conn = DataSourceUtils.getConnection(dataSource);
            TCCFenceDO tccFenceDO = TCC_FENCE_DAO.queryTCCFenceDO(conn, xid, branchId);
            if (tccFenceDO == null) {
                throw new TCCFenceException(String.format("TCC fence record not exists, commit fence method failed. xid= %s, branchId= %s", xid, branchId),
                    FrameworkErrorCode.RecordAlreadyExists);
            }
            if (TCCFenceConstant.STATUS_COMMITTED == tccFenceDO.getStatus()) {
                LOGGER.info("Branch transaction has already committed before. idempotency rejected. xid: {}, branchId: {}, status: {}", xid, branchId, tccFenceDO.getStatus());
                return true;
            }
            if (TCCFenceConstant.STATUS_ROLLBACKED == tccFenceDO.getStatus() || TCCFenceConstant.STATUS_SUSPENDED == tccFenceDO.getStatus()) {
                if (LOGGER.isWarnEnabled()) {
                    LOGGER.warn("Branch transaction status is unexpected. xid: {}, branchId: {}, status: {}", xid, branchId, tccFenceDO.getStatus());
                }
                return false;
            }
            return updateStatusAndInvokeTargetMethod(conn, commitMethod, targetTCCBean, xid, branchId, TCCFenceConstant.STATUS_COMMITTED, status, args);
        } catch (Throwable t) {
            status.setRollbackOnly();
            throw new SkipCallbackWrapperException(t);
        }
    });
}

From the code, we can see that when committing the transaction, it first checks whether a record exists in the tcc_fence_log table. If a record exists and its status is already STATUS_COMMITTED, it simply returns true, so duplicate commits are rejected and idempotence is guaranteed. If no record exists in the tcc_fence_log table, an exception is thrown and the outer branchCommit reports a retryable failure to the TC.

The rollback logic is similar to the commit logic and is implemented in the rollbackFence method of the TCCFenceHandler class.

4.2 Empty Rollback

In the scenario shown in the following diagram, the account service is a cluster of two nodes, and the account service on Node 1 fails during the try phase. Leaving retries aside, the global transaction still has to reach its final state, which requires a cancel operation to be performed on the account service even though its try was never executed.

fence-empty-rollback

Seata's solution is to insert a record into the tcc_fence_log table during the try phase, with the status field set to STATUS_TRIED. During the rollback phase, it checks if the record exists, and if it doesn't, the rollback operation is not executed. The code is as follows:

//TCCFenceHandler
public static Object prepareFence(String xid, Long branchId, String actionName, Callback<Object> targetCallback) {
    return transactionTemplate.execute(status -> {
        try {
            Connection conn = DataSourceUtils.getConnection(dataSource);
            boolean result = insertTCCFenceLog(conn, xid, branchId, actionName, TCCFenceConstant.STATUS_TRIED);
            LOGGER.info("TCC fence prepare result: {}. xid: {}, branchId: {}", result, xid, branchId);
            if (result) {
                return targetCallback.execute();
            } else {
                throw new TCCFenceException(String.format("Insert tcc fence record error, prepare fence failed. xid= %s, branchId= %s", xid, branchId),
                    FrameworkErrorCode.InsertRecordError);
            }
        } catch (TCCFenceException e) {
            // duplicate key handling is shown in section 4.3; roll the local transaction back
            status.setRollbackOnly();
            throw new SkipCallbackWrapperException(e);
        } catch (Throwable t) {
            status.setRollbackOnly();
            throw new SkipCallbackWrapperException(t);
        }
    });
}

The processing logic in the Rollback phase is as follows:

//TCCFenceHandler
public static boolean rollbackFence(Method rollbackMethod, Object targetTCCBean,
                                    String xid, Long branchId, Object[] args, String actionName) {
    return transactionTemplate.execute(status -> {
        try {
            Connection conn = DataSourceUtils.getConnection(dataSource);
            TCCFenceDO tccFenceDO = TCC_FENCE_DAO.queryTCCFenceDO(conn, xid, branchId);
            // non_rollback
            if (tccFenceDO == null) {
                // the rollback logic is not executed
                return true;
            } else {
                if (TCCFenceConstant.STATUS_ROLLBACKED == tccFenceDO.getStatus() || TCCFenceConstant.STATUS_SUSPENDED == tccFenceDO.getStatus()) {
                    LOGGER.info("Branch transaction had already rollbacked before, idempotency rejected. xid: {}, branchId: {}, status: {}", xid, branchId, tccFenceDO.getStatus());
                    return true;
                }
                if (TCCFenceConstant.STATUS_COMMITTED == tccFenceDO.getStatus()) {
                    if (LOGGER.isWarnEnabled()) {
                        LOGGER.warn("Branch transaction status is unexpected. xid: {}, branchId: {}, status: {}", xid, branchId, tccFenceDO.getStatus());
                    }
                    return false;
                }
            }
            return updateStatusAndInvokeTargetMethod(conn, rollbackMethod, targetTCCBean, xid, branchId, TCCFenceConstant.STATUS_ROLLBACKED, status, args);
        } catch (Throwable t) {
            status.setRollbackOnly();
            throw new SkipCallbackWrapperException(t);
        }
    });
}

The updateStatusAndInvokeTargetMethod method executes the following SQL:

update tcc_fence_log set status = ?, gmt_modified = ?
where xid = ? and branch_id = ? and status = ? ;

As we can see, it updates the value of the status field in the tcc_fence_log table from STATUS_TRIED to STATUS_ROLLBACKED. If the update is successful, the rollback logic is executed.
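To make this concrete, here is a minimal method-level sketch of what updateStatusAndInvokeTargetMethod plausibly does inside TCCFenceHandler, reconstructed from the conditional-update SQL above; the DAO call updateTCCFenceDO and the exact signature are assumptions for illustration rather than a copy of the Seata source.

// Sketch only: reconstructed from the SQL above; not the exact Seata implementation.
private static boolean updateStatusAndInvokeTargetMethod(Connection conn, Method method, Object targetTCCBean,
                                                         String xid, Long branchId, int status,
                                                         TransactionStatus transactionStatus,
                                                         Object... args) throws Exception {
    // Conditional update: only succeeds while the current status is still STATUS_TRIED,
    // which corresponds to the "where ... and status = ?" clause of the SQL above.
    boolean result = TCC_FENCE_DAO.updateTCCFenceDO(conn, xid, branchId, status, TCCFenceConstant.STATUS_TRIED);
    if (result) {
        // Only after the status flip succeeds is the user's commit/rollback method invoked
        // (assuming it returns boolean, as in the StorageService example above).
        result = (boolean) method.invoke(targetTCCBean, args);
        if (!result) {
            // Business method failed: mark the surrounding local transaction for rollback
            // so the status change is undone together with it.
            transactionStatus.setRollbackOnly();
        }
    }
    return result;
}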

4.3 Hanging

Hanging refers to the situation where, due to network problems, the RM does not receive the try request at first; the global transaction is rolled back, and only afterwards does the try request arrive and successfully reserve resources. Those reserved resources can then never be released, as shown in the following diagram:

fence-suspend

Seata solves this problem by checking if there is a record for the current xid in the tcc_fence_log table before executing the rollback method. If there is no record, it inserts a new record into the tcc_fence_log table with the status STATUS_SUSPENDED and does not perform the rollback operation. The code is as follows:

public static boolean rollbackFence(Method rollbackMethod, Object targetTCCBean,
                                    String xid, Long branchId, Object[] args, String actionName) {
    return transactionTemplate.execute(status -> {
        try {
            Connection conn = DataSourceUtils.getConnection(dataSource);
            TCCFenceDO tccFenceDO = TCC_FENCE_DAO.queryTCCFenceDO(conn, xid, branchId);
            // non_rollback
            if (tccFenceDO == null) {
                // no try record: insert a suspended record and skip the rollback logic
                boolean result = insertTCCFenceLog(conn, xid, branchId, actionName, TCCFenceConstant.STATUS_SUSPENDED);
                return true;
            } else {
                // idempotency checks on the existing record (shown in full in section 4.2)
            }
            return updateStatusAndInvokeTargetMethod(conn, rollbackMethod, targetTCCBean, xid, branchId, TCCFenceConstant.STATUS_ROLLBACKED, status, args);
        } catch (Throwable t) {
            status.setRollbackOnly();
            throw new SkipCallbackWrapperException(t);
        }
    });
}

When the try phase method is executed later, it first tries to insert a record for the current xid into the tcc_fence_log table; because the cancel has already inserted a record with the same primary key (xid + branch_id), the insert fails with a primary key conflict. The code is as follows:

//TCCFenceHandler
public static Object prepareFence(String xid, Long branchId, String actionName, Callback<Object> targetCallback) {
    return transactionTemplate.execute(status -> {
        try {
            Connection conn = DataSourceUtils.getConnection(dataSource);
            boolean result = insertTCCFenceLog(conn, xid, branchId, actionName, TCCFenceConstant.STATUS_TRIED);
            LOGGER.info("TCC fence prepare result: {}. xid: {}, branchId: {}", result, xid, branchId);
            if (result) {
                return targetCallback.execute();
            } else {
                throw new TCCFenceException(String.format("Insert tcc fence record error, prepare fence failed. xid= %s, branchId= %s", xid, branchId),
                    FrameworkErrorCode.InsertRecordError);
            }
        } catch (TCCFenceException e) {
            if (e.getErrcode() == FrameworkErrorCode.DuplicateKeyException) {
                LOGGER.error("Branch transaction has already rollbacked before,prepare fence failed. xid= {},branchId = {}", xid, branchId);
                addToLogCleanQueue(xid, branchId);
            }
            status.setRollbackOnly();
            throw new SkipCallbackWrapperException(e);
        } catch (Throwable t) {
            status.setRollbackOnly();
            throw new SkipCallbackWrapperException(t);
        }
    });
}

Note: the SQL inside the queryTCCFenceDO method uses select ... for update, so there is no need to worry that the rollback method cannot determine the outcome of the try phase's local transaction because it cannot read the record from the tcc_fence_log table: the query simply blocks until that local transaction finishes.

5 Summary

TCC mode is a very important transaction mode in distributed transactions, but idempotence, hanging, and empty rollback have always been issues that must be considered when using it. Seata solves these problems in version 1.5.1 with the tcc_fence_log table. Operations on the tcc_fence_log table themselves need transaction control: Seata uses a proxy data source so that the operations on tcc_fence_log and the RM's business operations are executed in the same local transaction, ensuring that they succeed or fail together.
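As a rough illustration of that last point, the sketch below shows the general shape of running the fence-log write and the business update inside one Spring-managed local transaction on the business data source. The class, the storage table, and the SQL are made up for illustration; this is not Seata source code, only a picture of the "succeed or fail together" guarantee.

// Sketch: both statements share one local transaction, so they commit or roll back together.
import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;
import org.springframework.transaction.support.TransactionTemplate;

public class FenceAndBusinessInOneTx {

    private final TransactionTemplate transactionTemplate;
    private final JdbcTemplate jdbcTemplate;

    public FenceAndBusinessInOneTx(DataSource businessDataSource) {
        // The transaction manager is bound to the business data source,
        // which is why the fence record and the business change are atomic.
        this.transactionTemplate = new TransactionTemplate(new DataSourceTransactionManager(businessDataSource));
        this.jdbcTemplate = new JdbcTemplate(businessDataSource);
    }

    public void tryDecrease(String xid, long branchId, long productId, int count) {
        transactionTemplate.execute(status -> {
            // 1. write the fence record (analogous to insertTCCFenceLog with STATUS_TRIED)
            jdbcTemplate.update(
                "insert into tcc_fence_log(xid, branch_id, action_name, status, gmt_create, gmt_modified) values (?, ?, ?, 1, now(), now())",
                xid, branchId, "storageApi");
            // 2. do the business reservation in the same local transaction
            jdbcTemplate.update("update storage set frozen = frozen + ? where product_id = ?", count, productId);
            return null;
        });
    }
}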

· 12 min read

Seata currently supports AT mode, XA mode, TCC mode, and SAGA mode. Previous articles have talked more about non-intrusive AT mode. Today, we will introduce TCC mode, which is also a two-phase commit.

What is TCC

TCC is a two-phase commit protocol in distributed transactions. Its full name is Try-Confirm-Cancel. Their specific meanings are as follows:

  1. Try: Check and reserve business resources;
  2. Confirm: Commit the business transaction, i.e., the commit operation. If Try is successful, this step will definitely be successful;
  3. Cancel: Cancel the business transaction, i.e., the rollback operation. This step will release the resources reserved in Try.

TCC is an intrusive distributed transaction solution. All three operations need to be implemented by the business system itself, which has a significant impact on it, and the design is relatively complex. The advantage is that TCC is not tied to the underlying database: it can manage resources across databases and applications and, through intrusive coding, turn different data accesses into one atomic operation, which makes it better suited to the distributed transaction problems of complex business scenarios.

img

Seata TCC mode

Seata TCC mode follows the same principle as the general TCC mode. Let's first use Seata TCC mode to implement a distributed transaction:

Suppose there is a business that needs to use service A and service B to complete a transaction operation. We define a TCC interface for this service in service A:

public interface TccActionOne {

    @TwoPhaseBusinessAction(name = "DubboTccActionOne", commitMethod = "commit", rollbackMethod = "rollback")
    public boolean prepare(BusinessActionContext actionContext, @BusinessActionContextParameter(paramName = "a") String a);

    public boolean commit(BusinessActionContext actionContext);

    public boolean rollback(BusinessActionContext actionContext);
}

Similarly, we define a TCC interface for this service in service B:

public interface TccActionTwo {

    @TwoPhaseBusinessAction(name = "DubboTccActionTwo", commitMethod = "commit", rollbackMethod = "rollback")
    public void prepare(BusinessActionContext actionContext, @BusinessActionContextParameter(paramName = "b") String b);

    public void commit(BusinessActionContext actionContext);

    public void rollback(BusinessActionContext actionContext);
}

In the business system, we start a global transaction and execute the TCC reserve resource methods for service A and service B:

@GlobalTransactional
public String doTransactionCommit() {
    // Service A transaction participant
    tccActionOne.prepare(null, "one");
    // Service B transaction participant
    tccActionTwo.prepare(null, "two");
    // return value for illustration only
    return "success";
}

The example above demonstrates the implementation of a global transaction using Seata TCC mode. It can be seen that the TCC mode also uses the @GlobalTransactional annotation to initiate a global transaction, while the TCC interfaces of Service A and Service B are transaction participants. Seata treats a TCC interface as a Resource, also known as a TCC Resource.

TCC interfaces can be RPC or internal JVM calls, meaning that a TCC interface has both a sender and a caller identity. In the example above, the TCC interface is the sender in Service A and Service B, and the caller in the business system. If the TCC interface is a Dubbo RPC, the caller is a dubbo:reference and the sender is a dubbo:service.

img

When Seata starts, it scans and parses the TCC interfaces. If a TCC interface is a sender, Seata registers the TCC Resource with the TC during startup, and each TCC Resource has a resource ID. If a TCC interface is a caller, Seata proxies the caller and intercepts the TCC interface calls. Similar to the AT mode, the proxy intercepts the call to the Try method, registers a branch transaction with the TC, and then executes the original RPC call.

When the global transaction decides to commit/rollback, the TC will callback to the corresponding participant service to execute the Confirm/Cancel method of the TCC Resource using the resource ID registered by the branch.

How Seata Implements TCC Mode

From the above Seata TCC model, it can be seen that the TCC mode in Seata also follows the TC, TM, RM three-role model. How to implement TCC mode in these three-role models? I mainly summarize the implementation as resource parsing, resource management, and transaction processing.

Resource Parsing

Resource parsing is the process of parsing and registering TCC interfaces. As mentioned earlier, TCC interfaces can be RPC or internal JVM calls. In the Seata TCC module, there is a remoting module that is specifically used to parse TCC interfaces with the TwoPhaseBusinessAction annotation:

img

The RemotingParser interface mainly has methods such as isRemoting, isReference, isService, getServiceDesc, etc. The default implementation is DefaultRemotingParser, and the parsing of various RPC protocols is executed in DefaultRemotingParser. Seata has already implemented parsing of Dubbo, HSF, SofaRpc, and LocalTCC RPC protocols while also providing SPI extensibility for additional RPC protocol parsing classes.

During the Seata startup process, GlobalTransactionScanner scans the beans and executes the following method:

io.seata.spring.util.TCCBeanParserUtils#isTccAutoProxy

The purpose of this method is to determine whether the bean should be proxied for TCC. In the process, it first checks whether the bean is a remoting bean; if it is, the remoting bean is parsed, and if the bean is a sender, the resource is registered:

io.seata.rm.tcc.remoting.parser.DefaultRemotingParser#parserRemotingServiceInfo

public RemotingDesc parserRemotingServiceInfo(Object bean, String beanName, RemotingParser remotingParser) {
    RemotingDesc remotingBeanDesc = remotingParser.getServiceDesc(bean, beanName);
    if (remotingBeanDesc == null) {
        return null;
    }
    remotingServiceMap.put(beanName, remotingBeanDesc);

    Class<?> interfaceClass = remotingBeanDesc.getInterfaceClass();
    Method[] methods = interfaceClass.getMethods();
    if (remotingParser.isService(bean, beanName)) {
        try {
            //service bean, registry resource
            Object targetBean = remotingBeanDesc.getTargetBean();
            for (Method m : methods) {
                TwoPhaseBusinessAction twoPhaseBusinessAction = m.getAnnotation(TwoPhaseBusinessAction.class);
                if (twoPhaseBusinessAction != null) {
                    TCCResource tccResource = new TCCResource();
                    tccResource.setActionName(twoPhaseBusinessAction.name());
                    tccResource.setTargetBean(targetBean);
                    tccResource.setPrepareMethod(m);
                    tccResource.setCommitMethodName(twoPhaseBusinessAction.commitMethod());
                    tccResource.setCommitMethod(interfaceClass.getMethod(twoPhaseBusinessAction.commitMethod(),
                        twoPhaseBusinessAction.commitArgsClasses()));
                    tccResource.setRollbackMethodName(twoPhaseBusinessAction.rollbackMethod());
                    tccResource.setRollbackMethod(interfaceClass.getMethod(twoPhaseBusinessAction.rollbackMethod(),
                        twoPhaseBusinessAction.rollbackArgsClasses()));
                    // set argsClasses
                    tccResource.setCommitArgsClasses(twoPhaseBusinessAction.commitArgsClasses());
                    tccResource.setRollbackArgsClasses(twoPhaseBusinessAction.rollbackArgsClasses());
                    // set phase two method's keys
                    tccResource.setPhaseTwoCommitKeys(this.getTwoPhaseArgs(tccResource.getCommitMethod(),
                        twoPhaseBusinessAction.commitArgsClasses()));
                    tccResource.setPhaseTwoRollbackKeys(this.getTwoPhaseArgs(tccResource.getRollbackMethod(),
                        twoPhaseBusinessAction.rollbackArgsClasses()));
                    // registry tcc resource
                    DefaultResourceManager.get().registerResource(tccResource);
                }
            }
        } catch (Throwable t) {
            throw new FrameworkException(t, "parser remoting service error");
        }
    }
    if (remotingParser.isReference(bean, beanName)) {
        // reference bean, TCC proxy
        remotingBeanDesc.setReference(true);
    }
    return remotingBeanDesc;
}

The above method first calls the parsing class getServiceDesc method to parse the remoting bean and puts the parsed remotingBeanDesc into the local cache remotingServiceMap. At the same time, it calls the parsing class isService method to determine if it is the initiator. If it is the initiator, it parses the content of the TwoPhaseBusinessAction annotation to generate a TCCResource and registers it as a resource.

Resource Management

1. Resource Registration

The resource for Seata TCC mode is called TCCResource, and its resource manager is called TCCResourceManager. As mentioned earlier, after parsing the TCC interface RPC resource, if it is the initiator, it will be registered as a resource:

io.seata.rm.tcc.TCCResourceManager#registerResource

public void registerResource(Resource resource) {
    TCCResource tccResource = (TCCResource) resource;
    tccResourceCache.put(tccResource.getResourceId(), tccResource);
    super.registerResource(tccResource);
}

TCCResource contains the relevant information of the TCC interface and is cached locally. It then calls the parent class's registerResource method (which encapsulates the communication logic) to register with the TC. The TCC resource's resourceId is the actionName, and the actionName is the name in the @TwoPhaseBusinessAction annotation.

2. Resource Commit/Rollback

io.seata.rm.tcc.TCCResourceManager#branchCommit

public BranchStatus branchCommit(BranchType branchType, String xid, long branchId, String resourceId,
                                 String applicationData) throws TransactionException {
    TCCResource tccResource = (TCCResource) tccResourceCache.get(resourceId);
    if (tccResource == null) {
        throw new ShouldNeverHappenException(String.format("TCC resource is not exist, resourceId: %s", resourceId));
    }
    Object targetTCCBean = tccResource.getTargetBean();
    Method commitMethod = tccResource.getCommitMethod();
    if (targetTCCBean == null || commitMethod == null) {
        throw new ShouldNeverHappenException(String.format("TCC resource is not available, resourceId: %s", resourceId));
    }
    try {
        //BusinessActionContext
        BusinessActionContext businessActionContext = getBusinessActionContext(xid, branchId, resourceId,
            applicationData);
        // ... ...
        ret = commitMethod.invoke(targetTCCBean, args);
        // ... ...
        return result ? BranchStatus.PhaseTwo_Committed : BranchStatus.PhaseTwo_CommitFailed_Retryable;
    } catch (Throwable t) {
        String msg = String.format("commit TCC resource error, resourceId: %s, xid: %s.", resourceId, xid);
        LOGGER.error(msg, t);
        return BranchStatus.PhaseTwo_CommitFailed_Retryable;
    }
}

When the TM resolves the phase two commit, the TC will callback to the corresponding participant (i.e., TCC interface initiator) service to execute the Confirm/Cancel method of the TCC Resource registered by the branch.

In the resource manager, the corresponding TCCResource will be found in the local cache based on the resourceId, and the corresponding BusinessActionContext will be found based on xid, branchId, resourceId, and applicationData, and the parameters to be executed are in the context. Finally, the commit method of the TCCResource is executed to perform the phase two commit.

The phase two rollback is similar.
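The sketch below mirrors the branchCommit method above for the rollback side; helper names such as getTwoPhaseRollbackArgs are assumed counterparts of the commit-path helpers, so treat it as an illustration rather than the exact Seata source.

// Sketch: the rollback counterpart of branchCommit above.
public BranchStatus branchRollback(BranchType branchType, String xid, long branchId, String resourceId,
                                   String applicationData) throws TransactionException {
    TCCResource tccResource = (TCCResource) tccResourceCache.get(resourceId);
    if (tccResource == null) {
        throw new ShouldNeverHappenException(String.format("TCC resource is not exist, resourceId: %s", resourceId));
    }
    Object targetTCCBean = tccResource.getTargetBean();
    Method rollbackMethod = tccResource.getRollbackMethod();
    if (targetTCCBean == null || rollbackMethod == null) {
        throw new ShouldNeverHappenException(String.format("TCC resource is not available, resourceId: %s", resourceId));
    }
    try {
        // rebuild the BusinessActionContext from xid, branchId, resourceId and applicationData
        BusinessActionContext businessActionContext = getBusinessActionContext(xid, branchId, resourceId, applicationData);
        Object[] args = this.getTwoPhaseRollbackArgs(tccResource, businessActionContext);
        // invoke the user's Cancel method; assuming it returns boolean as in the interface examples above
        boolean result = (boolean) rollbackMethod.invoke(targetTCCBean, args);
        return result ? BranchStatus.PhaseTwo_Rollbacked : BranchStatus.PhaseTwo_RollbackFailed_Retryable;
    } catch (Throwable t) {
        LOGGER.error(String.format("rollback TCC resource error, resourceId: %s, xid: %s.", resourceId, xid), t);
        return BranchStatus.PhaseTwo_RollbackFailed_Retryable;
    }
}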

Transaction Processing

As mentioned earlier, if the TCC interface is a caller, the Seata TCC proxy will be used to intercept the caller and register the branch before processing the actual RPC method call.

The method io.seata.spring.util.TCCBeanParserUtils#isTccAutoProxy not only parses the TCC interface resources, but also determines whether the TCC interface is a caller. If it is a caller, it returns true:

io.seata.spring.annotation.GlobalTransactionScanner#wrapIfNecessary

img

As shown in the figure, when GlobalTransactionScanner scans a TCC interface caller (reference), it proxies it and intercepts it with TccActionInterceptor, which implements MethodInterceptor.

In TccActionInterceptor, it will also call ActionInterceptorHandler to execute the interception logic, and the transaction-related processing is in the ActionInterceptorHandler#proceed method:

public Object proceed(Method method, Object[] arguments, String xid, TwoPhaseBusinessAction businessAction,
                      Callback<Object> targetCallback) throws Throwable {
    //Get action context from arguments, or create a new one and then reset to arguments
    BusinessActionContext actionContext = getOrCreateActionContextAndResetToArguments(method.getParameterTypes(), arguments);
    //Creating Branch Record
    String branchId = doTccActionLogStore(method, arguments, businessAction, actionContext);
    // ... ...
    try {
        // ... ...
        return targetCallback.execute();
    } finally {
        try {
            //to report business action context finally if the actionContext.getUpdated() is true
            BusinessActionContextUtil.reportContext(actionContext);
        } finally {
            // ... ...
        }
    }
}

In the process of executing the first phase of the TCC interface, the doTccActionLogStore method is called for branch registration, and the TCC-related information such as parameters is placed in the context. This context will be used for resource submission/rollback as mentioned above.
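On the provider side, those parameters can later be read back from the BusinessActionContext in the second-phase methods. Below is an illustrative implementation of the TccActionOne interface shown earlier; the class name and business comments are made up, and registration details (Dubbo export, Spring bean wiring) are omitted.

// Illustrative implementation: reads back the "a" parameter that was put into the
// context by @BusinessActionContextParameter during the Try phase.
public class TccActionOneImpl implements TccActionOne {

    @Override
    public boolean prepare(BusinessActionContext actionContext, String a) {
        // Try: reserve resources using parameter "a"
        return true;
    }

    @Override
    public boolean commit(BusinessActionContext actionContext) {
        String xid = actionContext.getXid();
        // the value recorded in the Try phase is available here
        Object a = actionContext.getActionContext("a");
        // Confirm: apply the resources reserved under xid / a
        return true;
    }

    @Override
    public boolean rollback(BusinessActionContext actionContext) {
        Object a = actionContext.getActionContext("a");
        // Cancel: release the resources reserved in the Try phase
        return true;
    }
}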

How to control exceptions

In the process of executing the TCC model, various exceptions may occur, the most common of which are empty rollback, idempotence, and suspension. Here I will explain how Seata handles these three types of exceptions.

How to handle empty rollback

What is an empty rollback?

An empty rollback refers to a situation in a distributed transaction where the TM drives the second-phase rollback of the participant's Cancel method without calling the participant's Try method.

How does an empty rollback occur?

img

As shown in the above figure, after the global transaction is opened, participant A will execute the first-phase RPC method after completing branch registration. If the machine where participant A is located crashes or there is a network anomaly at this time, the RPC call will fail, meaning that participant A's first-phase method did not execute successfully. However, the global transaction has already been opened, so Seata must progress to the final state. When the global transaction is rolled back, participant A's Cancel method will be called, resulting in an empty rollback.

To prevent empty rollback, it is necessary to identify it in the Cancel method. How does Seata do this?

Seata's approach is to add a TCC transaction control table, which contains the XID and BranchID information of the transaction. A record is inserted when the Try method is executed, indicating that phase one has been executed. When the Cancel method is executed, this record is read. If the record does not exist, it means that the Try method was not executed.
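A minimal sketch of this check on the Cancel side is shown below. It reuses the tcc_fence_log schema from the 1.5.1 article above purely for illustration; the class, the JdbcTemplate usage, and the SQL are assumptions, not Seata's built-in implementation.

// Sketch: a Cancel implementation that skips its logic when no Try record exists (empty rollback protection).
import io.seata.rm.tcc.api.BusinessActionContext;
import org.springframework.jdbc.core.JdbcTemplate;

public class AccountTccAction {

    private final JdbcTemplate jdbcTemplate;

    public AccountTccAction(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    public boolean cancel(BusinessActionContext actionContext) {
        Integer tried = jdbcTemplate.queryForObject(
            "select count(*) from tcc_fence_log where xid = ? and branch_id = ?",
            Integer.class, actionContext.getXid(), actionContext.getBranchId());
        if (tried == null || tried == 0) {
            // Try never ran: this is an empty rollback, report success without releasing anything
            return true;
        }
        // ... release the amount reserved by the Try phase ...
        return true;
    }
}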

How to Handle Idempotent Operations

Idempotence is required because the TC may repeat the second-phase commit or rollback, so the Confirm/Cancel interfaces must support idempotent processing, meaning that repeated calls will not deduct or release resources more than once.

So how do these repeated calls arise?

img

As shown in the above figure, after participant A finishes executing its second phase, network jitter or a machine failure may prevent the TC from receiving the result. The TC will keep retrying the call until it receives a successful second-phase result.

How does Seata handle idempotent operations?

Similarly, a status field is added to the TCC transaction control table. This field has 3 values:

  1. tried: 1
  2. committed: 2
  3. rollbacked: 3

After the execution of the two-phase Confirm/Cancel method, the status is changed to committed or rollbacked. When the two-phase Confirm/Cancel method is called repeatedly, checking the transaction status can solve the idempotent problem.

How to Handle Suspension

Suspension refers to the second-phase Cancel method being executed before the first-phase Try method. Because empty rollback is allowed, the Cancel simply returns success and the global transaction ends; but the Try method is executed afterwards, so the resources it reserves can never be committed or released.

So how does suspension arise?

img

As shown in the above figure, when participant A's first-phase Try call is made, the network is congested; because of Seata's global transaction timeout, the TM decides to roll back the global transaction once the Try call times out. If the delayed RPC request then reaches participant A after the rollback has completed and the Try method reserves resources, suspension occurs.

How does Seata handle suspension?

Add a status to the TCC transaction control table:

  1. suspended: 4

When the second-phase Cancel method runs and finds no related record in the TCC transaction control table, it means the Cancel is running before the first-phase Try, so a record with status=4 is inserted. Later, when the Try method runs and encounters status=4, it knows the second-phase Cancel has already been executed and returns false, preventing the Try from succeeding.

Author Introduction

Zhang Chenghui, currently working at Ant Group, loves to share technology. He is the author of the WeChat public account "Advanced Backend," the author of the technical blog (https://objcoding.com/), and his GitHub ID is: objcoding.

· 8 min read

Seata AT mode is a non-intrusive distributed transaction solution. Seata internally implements a proxy layer for database operations. When using Seata AT mode, we actually use the built-in data source proxy DataSourceProxy provided by Seata. Seata adds a lot of logic in this proxy layer, such as inserting rollback undo_log records and checking global locks.

Why check global locks? Because the transaction isolation of Seata AT mode builds on the local isolation level of the database: on the premise that the local isolation level is read committed or higher, Seata uses a global exclusive write lock maintained by the transaction coordinator to guarantee write isolation between global transactions, while global transactions themselves run at the read-uncommitted isolation level by default.

Understanding Seata Transaction Isolation Levels

Before discussing Seata transaction isolation levels, let's review the isolation levels of database transactions. Currently, there are four types of database transaction isolation levels, from lowest to highest:

  1. Read uncommitted
  2. Read committed
  3. Repeatable read
  4. Serializable

The default isolation level for databases is usually read committed, such as Oracle, while some databases default to repeatable read, such as MySQL. Generally, the read committed isolation level of databases can satisfy the majority of business scenarios.

We know that a Seata transaction is a global transaction that includes several local transaction branches. While the global transaction is executing (before it completes), some local branches may have already committed. If Seata took no measures, other transactions could read the data committed by those local branches, causing dirty reads; and if such already-committed branch data were modified before the global transaction finishes, dirty writes could occur.

From this, we can see that traditional dirty reads involve reading uncommitted data, while Seata's dirty reads involve reading data that has not been committed under the global transaction, where the global transaction may include multiple local transactions. The fact that one local transaction commits does not mean that the global transaction commits.

Working under the read committed isolation level is fine for the vast majority of applications. In fact, the majority of scenarios that work under the read uncommitted isolation level also work fine.

In extreme scenarios, if an application needs to achieve global read committed, Seata also provides a global lock mechanism to implement global transaction read committed. However, by default, Seata's global transactions work under the read uncommitted isolation level to ensure efficiency in the majority of scenarios.

Implementation of Global Locks

In AT mode, Seata uses the internal data source proxy DataSourceProxy, and the implementation of global locks is hidden within this proxy. Let's see what happens during the execution and submission processes.

1. Execution Process

The execution process is in the StatementProxy class. During execution, if the executed SQL is select for update, the SelectForUpdateExecutor class is used. If the executed method is annotated with @GlobalTransactional or @GlobalLock, it checks if there is a global lock. If a global lock exists, it rolls back the local transaction and continuously competes to obtain local and global locks through a while loop.

io.seata.rm.datasource.exec.SelectForUpdateExecutor#doExecute

public T doExecute(Object... args) throws Throwable {
    Connection conn = statementProxy.getConnection();
    // ... ...
    try {
        // ... ...
        while (true) {
            try {
                // ... ...
                if (RootContext.inGlobalTransaction() || RootContext.requireGlobalLock()) {
                    // Do the same thing under either @GlobalTransactional or @GlobalLock,
                    // that only check the global lock here.
                    statementProxy.getConnectionProxy().checkLock(lockKeys);
                } else {
                    throw new RuntimeException("Unknown situation!");
                }
                break;
            } catch (LockConflictException lce) {
                if (sp != null) {
                    conn.rollback(sp);
                } else {
                    conn.rollback();
                }
                // trigger retry
                lockRetryController.sleep(lce);
            }
        }
    } finally {
        // ...
    }
}
2. Submission Process

The submission process occurs in the doCommit method of ConnectionProxy.

  1. If the executed method is annotated with @GlobalTransactional, it will acquire the global lock during branch registration:
  • Requesting TC to register a branch

io.seata.rm.datasource.ConnectionProxy#register

private void register() throws TransactionException {
    if (!context.hasUndoLog() || !context.hasLockKey()) {
        return;
    }
    Long branchId = DefaultResourceManager.get().branchRegister(BranchType.AT, getDataSourceProxy().getResourceId(),
        null, context.getXid(), null, context.buildLockKeys());
    context.setBranchId(branchId);
}
  • When a TC registers a branch, it obtains a global lock

io.seata.server.transaction.at.ATCore#branchSessionLock

protected void branchSessionLock(GlobalSession globalSession, BranchSession branchSession) throws TransactionException {
    if (!branchSession.lock()) {
        throw new BranchTransactionException(LockKeyConflict,
            String.format("Global lock acquire failed xid = %s branchId = %s", globalSession.getXid(), branchSession.getBranchId()));
    }
}

  2. If the executed method is annotated with @GlobalLock, the existence of the global lock is checked before committing, and if it exists, an exception is thrown:

io.seata.rm.datasource.ConnectionProxy#processLocalCommitWithGlobalLocks

private void processLocalCommitWithGlobalLocks() throws SQLException {
    checkLock(context.buildLockKeys());
    try {
        targetConnection.commit();
    } catch (Throwable ex) {
        throw new SQLException(ex);
    }
    context.reset();
}

GlobalLock Annotation Explanation

From the execution and commit processes above, a question arises: since opening a global transaction with @GlobalTransactional already checks whether the global lock exists before the transaction commits, why does Seata still provide a separate @GlobalLock annotation?

This is because not all database operations require opening a global transaction, and opening a global transaction is a relatively heavy operation that involves RPC round trips to the TC. The @GlobalLock annotation only checks for the global lock during execution and does not start a global transaction, so when a global transaction is unnecessary but the global lock still has to be checked to avoid dirty reads and writes, @GlobalLock is the lighter choice, as the sketch below shows.
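For example, a service method that only needs protection against dirty writes on rows managed by Seata global transactions, but does not need to open a global transaction of its own, might look like the following sketch (the service class, table, and SQL are illustrative; the data source is assumed to be wrapped by Seata's DataSourceProxy):

// Sketch: local transaction plus global lock check, without starting a global transaction.
import io.seata.spring.annotation.GlobalLock;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class AccountService {

    private final JdbcTemplate jdbcTemplate;  // backed by Seata's DataSourceProxy

    public AccountService(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    @GlobalLock
    @Transactional
    public void updateBalance(long accountId, long delta) {
        // Before this local transaction commits, Seata checks whether another global
        // transaction holds the global lock on the affected rows and throws if it does,
        // without the cost of registering a global transaction with the TC.
        jdbcTemplate.update("update account set balance = balance + ? where id = ?", delta, accountId);
    }
}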

How to Prevent Dirty Writes

Let's first understand how dirty writes occur when using Seata AT mode:

Note: Other processes in the branch transaction execution are omitted.

Business One starts a global transaction containing branch transaction A (which modifies A) and branch transaction B (which modifies B); meanwhile Business Two also modifies A. Business One's branch A acquires the local lock first, so Business Two waits until branch A finishes; it then acquires the local lock, modifies A, and commits to the database. Later, Business One encounters an exception and its global transaction needs to roll back, but because the data written by branch A has since been modified by Business Two, the global transaction cannot be rolled back.

How to prevent dirty writes?

  1. Business Two uses @GlobalTransactional annotation:

Note: Other processes in the branch transaction execution are omitted.

While Business Two executes its global transaction, it registers its branch transaction and tries to acquire the global lock before committing its change to A; it finds that Business One's global lock has not yet been released, so it cannot commit and throws an exception to roll back, thus preventing the dirty write.

  2. Business Two uses @GlobalLock annotation:

Note: Other processes in the branch transaction execution are omitted.

Similar to the effect of @GlobalTransactional annotation, but without the need to open a global transaction, it only checks the existence of the global lock before local transaction submission.

  3. Business Two uses @GlobalLock annotation + select for update statement:

If a select for update statement is added, it checks the existence of the global lock before the update operation. Business Two can only execute the updateA operation after the global lock is released.

If only @Transactional is used, dirty writes remain possible: without the @GlobalLock annotation the global lock is never checked, so another global transaction may later find, while rolling back, that its branch data has been modified. An added benefit of select for update is that it allows the lock check to be retried, since the conflict is detected at query time rather than failing at commit, as in the sketch below.
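Continuing the AccountService sketch above, Business Two's updateA could be written as follows; the select for update goes through SelectForUpdateExecutor, which waits for the global lock before the update runs (again, the table and SQL are illustrative):

// Sketch: @GlobalLock plus select for update, so the read blocks (and retries) until the
// global lock held by Business One's global transaction is released.
@GlobalLock
@Transactional
public void updateA(long accountId, long newBalance) {
    // The select for update is intercepted by SelectForUpdateExecutor: it acquires the local
    // row lock and loops until no global lock exists on this row.
    jdbcTemplate.queryForObject("select balance from account where id = ? for update", Long.class, accountId);
    // Only reached after the global lock is free, so the update cannot dirty-write data
    // that another global transaction may still need to roll back.
    jdbcTemplate.update("update account set balance = ? where id = ?", newBalance, accountId);
}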

How to Prevent Dirty Reads

Dirty reads in Seata AT mode refer to the scenario where data from a branch transaction that has been committed is read by another business before the global transaction is committed. Essentially, this is because Seata's default global transaction isolation level is read uncommitted.

So how to prevent dirty reads?

Business Two queries A with @GlobalLock annotation + select for update statement:

Adding the select for update statement causes the existence of the global lock to be checked before the query returns; the query proceeds only once no other global transaction holds the global lock on those rows, thus preventing dirty reads.

Author Bio:

Zhang Chenghui currently works at Ant Group and is passionate about sharing technology. He is the author of the WeChat public account "后端进阶" (Backend Advancement) and the owner of the technical blog (https://objcoding.com/). He is also a Seata Contributor with GitHub ID: objcoding.