A CBDC transaction can be confirmed as valid without anyone beyond the parties involved learning the amount.
This is the kind of system @SignOfficial is experimenting with: permissioned deployments built on Hyperledger Fabric.
It sounds reasonable. It's also where things start to get complicated.
Not everything is hidden. Only what needs to be revealed gets revealed.
In theory, CBDC design is stretched between two extremes. One end is as transparent as RTGS, where banks see everything. The other is as private as cash, where no one sees anything except the parties involved. Most systems pick a point in between.
Sign takes a different route: it splits the system into multiple spaces.
wCBDC for interbank settlement. rCBDC for retail users. A separate layer for regulators. Each namespace has its own endorsement policy, and transactions in each are validated under different rules.
Privacy is no longer an attribute attached to each transaction; it is a consequence of which namespace the transaction belongs to. In the wholesale space, transparency is close to RTGS. In retail, information is visible only to the sender, the receiver, and the designated regulator.
There is no single global setting. Each type of transaction gets its level of privacy from the architecture itself.
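The idea above can be sketched in a few lines. This is a toy model only; the role names and visibility sets are hypothetical, not the actual Sign or Fabric configuration. It just shows how visibility can be derived from the namespace a transaction lives in rather than from a per-transaction flag.

```python
from dataclasses import dataclass

# Hypothetical visibility rules per namespace (illustrative, not Sign's real config).
NAMESPACE_VISIBILITY = {
    "wCBDC": {"participating_bank", "central_bank", "regulator"},  # RTGS-like transparency
    "rCBDC": {"sender", "receiver", "regulator"},                  # retail: minimal disclosure
}

@dataclass
class Tx:
    namespace: str
    sender: str
    receiver: str
    amount: int

def visible_to(tx: Tx, role: str) -> bool:
    """A party sees a transaction only if its role is allowed in that namespace."""
    return role in NAMESPACE_VISIBILITY[tx.namespace]

retail_tx = Tx("rCBDC", "alice", "bob", 250)
assert visible_to(retail_tx, "regulator")          # designated regulator sees it
assert not visible_to(retail_tx, "participating_bank")  # other banks do not
```

Privacy here is a routing decision, which is exactly why misclassifying a transaction (discussed below) silently gives it the wrong disclosure rules.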
The second layer is transaction handling. The system uses the Hyperledger Fabric Token SDK with a UTXO model: each transaction consumes old outputs and creates new ones. Combined with zero-knowledge proofs (ZKPs), the system proves only what is necessary instead of disclosing all the data.
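The consume-old/create-new mechanic can be shown with a minimal in-memory ledger. This is a sketch of the UTXO accounting idea only; the actual Fabric Token SDK API looks nothing like this class.

```python
import uuid

class UTXOLedger:
    """Toy UTXO ledger: outputs are spent exactly once, value is conserved."""

    def __init__(self):
        self.unspent = {}  # output_id -> (owner, amount)

    def mint(self, owner: str, amount: int) -> str:
        oid = uuid.uuid4().hex
        self.unspent[oid] = (owner, amount)
        return oid

    def transfer(self, input_ids, outputs):
        """Consume the given inputs and create new outputs."""
        total_in = sum(self.unspent[i][1] for i in input_ids)
        total_out = sum(amt for _, amt in outputs)
        if total_in != total_out:
            raise ValueError("inputs and outputs must balance")
        for i in input_ids:
            del self.unspent[i]  # old outputs are destroyed, preventing double-spends
        return [self.mint(owner, amt) for owner, amt in outputs]

ledger = UTXOLedger()
coin = ledger.mint("alice", 100)
pay, change = ledger.transfer([coin], [("bob", 60), ("alice", 40)])
assert coin not in ledger.unspent  # the old output no longer exists
```

The point of the UTXO shape is that validity is local to each transaction's inputs and outputs, which is what makes it possible to attach proofs to transactions instead of to accounts.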
For example, a retail transaction can prove that it does not exceed a 10,000 USD limit without revealing the exact amount, or prove that the recipient belongs to a qualifying group without disclosing their full identity. Verification still happens; the data just isn't fully revealed.
This type of permissioned deployment targets a throughput of roughly 100,000 transactions per second, suitable for interbank environments or national-scale rollouts. At that scale, "hide everything and decrypt when needed" is impractical. Selective disclosure becomes mandatory.
But this is where risks emerge. Privacy by namespace raises the question: who decides which namespace you belong to? If a transaction is misclassified, its privacy level is wrong. If the endorsement policy is skewed, validation rights are skewed with it. That is not a cryptography failure; it is a governance failure.
A deeper layer: regulators are no longer on the outside. They are granted access at the architectural level, which is necessary for compliance, auditing, and monetary policy. But assuming that access will always be used as intended is itself a real risk.
This system works when three things stay in balance: sufficiently clear namespace separation, sufficiently strict policies, and correctly controlled access. It starts to break when any one of them slips: mixed namespaces, loose policies, or access that extends beyond what was intended.
Privacy in the Sign Protocol is not an on/off switch. It is the result of how the system defines who you are, where you are transacting, and which rules apply. Not every transaction is private, and not every transaction is transparent. Each transaction is born with a predefined level of privacy.
It works as long as the architecture maintains the boundaries between layers.
It fails the moment those boundaries are breached.
That is also why I keep watching how designs like this are implemented in practice, where failures stem not from the code, but from how people define and operate the system.
$SIGN #SignDigitalSovereignInfra
