Most legal systems treat a signature as proof that someone agreed. Not proof that they understood. Not proof that the information they were given was accurate. Just proof that a pen touched paper, or a button got clicked.
That gap has always existed. It just was not visible before.
On-chain attestation systems like $SIGN make this more interesting to think about. The protocol creates a verifiable record that a signature happened. The record is tamper-proof and timestamped. That part works.
But the record cannot capture the conditions around the signing. A document signed under pressure produces the same on-chain output as one signed freely and with full understanding.
$SIGN proves execution. It does not prove intent.
That is not a criticism of the system. Every signing infrastructure has this limitation. What it does raise is a question worth sitting with. If we are building trust infrastructure at a government scale, at what point does proof of execution become enough, and when does it fall short of what trust actually requires? #SignDigitalSovereignInfra @SignOfficial #Sign
The Same Credential Can Be Built on Very Different Math and the Receiver Usually Cannot Tell
I was going through Sign Protocol's documentation and something kept pulling at me. The protocol supports three different signature standards. ECDSA, EdDSA, and RSA. That sounds like a flexibility advantage, and maybe it is. But the longer I sat with it, the more I started thinking about the person on the other end of that credential and what they actually know when it arrives. The credential looks the same regardless of which method produced it. That is the part I could not stop thinking about.

Sign Protocol is an attestation system. Someone issues a signed data record, anchors it to a chain or a storage layer like Arweave, and anyone can theoretically verify it. The use cases they point to are broad. Identity, audit proofs, cross-chain reputation. The word sovereign shows up a lot in their materials. Whether the current system actually lives up to that framing is a separate question I keep sitting with.

To understand the signature problem it helps to follow what actually happens when an attestation gets made. An issuer builds a schema, which is basically a template for what fields the record will carry. They fill it in, sign it, and it gets stored somewhere. On an EVM chain like Ethereum or BNB Chain, the record goes through a smart contract. Off-chain records go to Arweave or IPFS.

Here is where it gets interesting. On EVM chains there is no algorithm choice happening. Ethereum runs on secp256k1 ECDSA and the smart contract recovers the attester address through ecrecover, which only works with that one algorithm. So every EVM on-chain attestation is ECDSA by default. The chain decides that, not the protocol.
On TON it is Ed25519 because that is what TON uses natively. Solana is referenced in their documentation as something coming but I could not find a live implementation. In every on-chain case the host chain is picking the scheme. Sign Protocol is inheriting that choice, not making one.

The off-chain layer is different. There is no precompile forcing anything so the design is genuinely open. Older posts from the team describe it as able to support RSA, other ECDSA variants, EdDSA, even zero knowledge proofs as a form of consent. The phrasing they used was "can easily add support for", which to me reads more like a design ambition than something you can go use today. Looking at the SDK the only off-chain signing option I could find documented was EIP-712, which is still secp256k1. Whether RSA is actually available somewhere in the tooling I could not confirm. I am not certain it is. So the multi-algorithm story is architecturally plausible but practically thin, at least from what is publicly documented.

Now the part that kept me coming back to this. Even if all three schemes were fully deployed and working, what does the verifier actually see when a credential arrives? I went through the attestation struct and the schema struct in the contracts. Neither has a field for the signature algorithm. There is a field for where the data lives, one for the attester's address, timestamps, revocation status. Nothing that says this was signed with EdDSA or this was RSA-2048. The schema definition is just a free-form string. The protocol does not standardize an algorithm identifier anywhere that I could find.

For on-chain EVM attestations that is not a real problem because there is only one option and the chain enforces it. But for off-chain attestations the receiver either has to know through some separate channel what scheme was used or the application has to sort it out on its own. The protocol does not hand that information over. This matters more than it might seem.
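To make the gap concrete, here is a minimal sketch of what a receiver holds when an attestation arrives. The field names are my own, not the protocol's actual struct or ABI; the point is the field that is missing, not the ones that are present.

```python
from dataclasses import dataclass

# Hypothetical mirror of the attestation record described above.
# Field names are illustrative. Note what is absent: nothing in the
# record identifies the signature algorithm that produced it.
@dataclass
class Attestation:
    schema_id: str        # free-form schema reference
    attester: str         # attester address
    attest_timestamp: int
    valid_until: int      # 0 means never expires
    revoked: bool
    data_location: str    # e.g. "onchain", "arweave", "ipfs"
    data: bytes
    # signature_algorithm: no such field in the documented structs

def assumed_scheme(att: Attestation) -> str:
    """What a receiver can infer about the signing scheme without an explicit field."""
    if att.data_location == "onchain":
        # EVM chains force secp256k1 ECDSA because ecrecover handles only that.
        return "ECDSA-secp256k1 (implied by the host chain)"
    # Off-chain: the record carries no signal; the scheme must be agreed out of band.
    return "unknown"

att = Attestation("schema-x", "0xabc", 1700000000, 0, False, "arweave", b"...")
print(assumed_scheme(att))  # -> unknown
```

The verifier-side function here is the part the protocol leaves to the application layer; for off-chain records it can only ever return a guess.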
ECDSA uses a random nonce every time it signs something. If the random number generator was weak when that signature was produced, the private key can potentially be recovered from the signature itself. That has actually happened in deployed systems. EdDSA does not have that problem because the nonce is derived deterministically from the key and the message. There is no randomness to fail. RSA works differently again and produces signatures so large that verifying them on-chain is not really practical.
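The nonce failure is worth seeing in the arithmetic. The sketch below skips the elliptic-curve step (deriving r from k*G) and exercises only the ECDSA signing equation s = k^-1 * (z + r*d) mod n with toy values, to show that two signatures sharing a nonce hand over the private key. Only the group order n is a real constant; every other number is made up.

```python
import hashlib

# secp256k1 group order (a real constant); every other number below is a toy.
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def h(msg: bytes) -> int:
    """Message hash reduced into the group order."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def sign(z: int, d: int, k: int, r: int) -> int:
    # ECDSA signing equation: s = k^-1 * (z + r*d) mod n
    return (pow(k, -1, n) * (z + r * d)) % n

d = 0xC0FFEE    # "private key" (toy value)
k = 0xDEADBEEF  # nonce, wrongly reused for two different messages
r = 0xABCDEF    # in real ECDSA this is (k*G).x mod n; any nonzero value
                # exercises the same algebra

z1, z2 = h(b"first message"), h(b"second message")
s1, s2 = sign(z1, d, k, r), sign(z2, d, k, r)

# Anyone holding both signatures can first solve for the shared nonce,
# then for the private key:
k_recovered = ((z1 - z2) * pow(s1 - s2, -1, n)) % n
d_recovered = ((s1 * k_recovered - z1) * pow(r, -1, n)) % n
assert k_recovered == k and d_recovered == d
```

This is exactly the failure mode that deterministic nonces (EdDSA, or RFC 6979 style ECDSA) remove: when k is derived from the key and the message, the same nonce only recurs for the same message, and the subtraction above yields nothing.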
These are meaningfully different security profiles. A receiver looking at an off-chain attestation and not knowing which scheme produced it is implicitly trusting whichever one the issuer chose. If it was ECDSA and the issuer had a bad RNG moment, the guarantee is weaker than it looks. The credential gives no signal either way. I do not think Sign Protocol is uniquely bad here. A lot of credential systems handle algorithm transparency poorly. But a protocol positioning itself around government-grade trust infrastructure probably needs to be more explicit about this than most. I also could not find a published security audit. OtterSec comes up in their materials but as a user of the system for publishing their own audit reports, not as someone who audited Sign Protocol's contracts. That might have changed. I just did not find it. The thing I keep returning to is simple. Flexibility across signature schemes is only useful if the receiver can tell which scheme they are dealing with. Without that, the flexibility mostly benefits the issuer and the receiver is left making assumptions. In a trust system, that feels like the wrong side to leave in the dark. @SignOfficial $SIGN #SignDigitalSovereignInfra #Ethereum
A contract written in one country often cannot be used directly in another. The legal terms, the definitions, the assumptions about what counts as proof or consent, all of these are shaped by the jurisdiction where the document was created. Moving the document across a border does not move the legal context with it.
This is a problem that exists in paper systems. I think it becomes more interesting in digital ones.
$SIGN uses a schema registry where attestations follow structured templates. A schema defines what fields a credential contains and what those fields are meant to represent. Any institution can register one.
But a schema written in a jurisdiction where identity means a government issued biometric record carries different assumptions than one written where identity means a community attestation or a tax number. The field names might be identical. The meaning of what goes inside them is not.
Sign Protocol can standardize the format. Whether it can standardize the meaning behind the format is one question. Whether a verifier in a different jurisdiction will recognize the issuer as trustworthy under their local law is a second question that sits entirely outside the protocol.
A Revoked Record Is Not a Deleted One and That Difference Matters More Than It Sounds
A small example makes this easier to see. When you cross out a word in a notebook, the word is still there. You can see what was written, you can read it, and you know it existed. The line through it tells you someone decided it was wrong. It does not make it gone. @SignOfficial

From what I can tell, parts of Sign Protocol seem to work in a way that resembles that crossed-out word. The protocol can create on-chain attestations, depending on how data is stored. When an issuer makes a claim about someone, that claim is written permanently to the blockchain. The documentation is fairly direct about this. It describes attestations as tamper-proof and finalized. The design is intentional. The point is that records cannot be quietly changed or erased after the fact, which matters enormously for accountability and trust.

But what happens when a record is wrong from the start? Based on the documented architecture, there doesn't appear to be an update or delete function. The only corrective action available is revocation. When an issuer revokes an attestation, two things change inside the record: a flag flips from false to true, and a timestamp gets added showing when revocation happened. Everything else stays exactly as it was. The original data, the attester address, the recipient address, the schema, the creation time. All of it appears to remain visible across the networks the protocol supports, which currently include Ethereum, Base, Arbitrum, Polygon, and about ten others.

The prescribed correction pattern, which the documentation doesn't seem to fully formalize as a workflow, involves revoking the wrong attestation and creating a new correct one that links back to the old. The linkedAttestationId field is the mechanism for this. It is described in the documentation in roughly one sentence. What you end up with is two permanent records on-chain: the wrong one and the correction. The wrong one still exists. The correction sits alongside it.
Anyone with blockchain access can read both.
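The crossed-out-word behavior can be modeled in a few lines. This is an illustrative sketch, not the protocol's contract code, and the field names are assumptions: revoke flips one flag, stamps one timestamp, and leaves every other field of the record untouched.

```python
from dataclasses import dataclass, replace

# Illustrative model of revocation-as-annotation. Field names are my own;
# the behavior mirrors what the documentation describes: nothing is deleted.
@dataclass(frozen=True)
class Attestation:
    attester: str
    recipient: str
    schema_id: str
    data: str
    attest_timestamp: int
    revoked: bool = False
    revoke_timestamp: int = 0

def revoke(att: Attestation, now: int) -> Attestation:
    # Only two fields change; the original claim remains fully readable.
    return replace(att, revoked=True, revoke_timestamp=now)

original = Attestation("0xissuer", "0xsubject", "identity-v1",
                       "wrong claim about a person", 1700000000)
annotated = revoke(original, 1707000000)

assert annotated.data == original.data              # the error is still visible
assert annotated.attest_timestamp == original.attest_timestamp
assert annotated.revoked and annotated.revoke_timestamp == 1707000000
```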
I want to be careful not to make this sound more alarming than it is. Traditional record systems also struggle with corrections. Medical records in existing hospitals carry a similar structure, where errors are documented as amendments rather than replacements, and the original wrong entry stays in the chart. Criminal records in many countries persist even after exoneration. Credit reports carry disputed items rather than deleting them. The problem of permanent incorrect records is not something blockchain invented.

But what blockchain changes is the scale and distribution of persistence. When an incorrect record lives in one hospital's database, fixing it means finding that database and updating it. When an incorrect attestation lives on fourteen public blockchains simultaneously, the wrong record exists in every node of every network, permanently, with no administrator who can overwrite it.

This sits in tension with something I think Sign Protocol has not yet fully addressed. In Europe, there's a concept often referred to as the 'right to erasure'. The idea is that a person has the right to have their personal data genuinely removed under certain conditions. In April 2025, the European Data Protection Board issued guidelines on blockchain and personal data processing that were quite direct. The guidelines said technical impossibility cannot be used to justify non-compliance, and that if GDPR-compliant processing cannot be achieved, blockchain should not be used for that particular purpose. The guidelines also noted that if data is stored on-chain using certain commitment schemes where both the data and its key are deleted, the on-chain commitment may become meaningless enough to satisfy erasure requirements. I couldn't find any indication that Sign Protocol implements or documents that approach.
I found no references to GDPR in the documentation, no discussion of the right to erasure, and no guidance on what a government deploying its identity infrastructure through Sign Protocol should do when a citizen requests that incorrect personal data be removed. I find this gap more interesting than troubling, at least for now. The project's current government agreements are with Kyrgyzstan and Sierra Leone, neither of which falls under European data protection law. The Abu Dhabi partnership is still at the strategic positioning stage. So the tension between immutability and erasure rights has not yet collided with a real deployment. But the stated ambition is to serve sovereign nations broadly, which would eventually include EU member states.

The hybrid storage option is worth mentioning because it changes the picture slightly. Some attestations may not live directly on-chain. They're stored on IPFS, a separate network. The issue is, IPFS doesn't promise permanent storage. If no one continues to pin the data, it can disappear over time. When that happens, the attestation still shows up on-chain, but the actual content behind it is no longer retrievable. The on-chain record would still carry a hash reference to what the data was, but the data itself would be gone. This creates a narrow path toward a form of de facto erasure. The documentation recommends against IPFS for reliability reasons and recommends Arweave instead, which is explicitly designed for permanent storage.

What I keep returning to is the asymmetry between who benefits from immutability and who bears the cost when something is wrong. Immutability benefits verifiers, institutions, and systems that want assurance that a record has not been tampered with after the fact. The cost of an incorrect but immutable record falls entirely on the person that record is about. That person cannot force the incorrect claim to disappear.
They can only hope the issuer revokes it and that every system relying on the record checks the revocation flag and honors it.
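That reliance has a concrete shape in code. Below is one possible verifier-side walk of a correction chain, with records pointing back via something like the linkedAttestationId field; the store layout and the lookup logic are hypothetical, which is the point: the protocol stores the link, but every verifier has to invent its own traversal.

```python
# Hypothetical local index of attestations; "linked_to" stands in for a
# linkedAttestationId-style back-reference from a correction to the record
# it replaces.
store = {
    "att-1": {"revoked": True,  "linked_to": None,    "claim": "eligible"},
    "att-2": {"revoked": False, "linked_to": "att-1", "claim": "not eligible"},
}

def latest_version(att_id: str) -> str:
    """Follow correction links forward to the newest record in the chain."""
    current = att_id
    while True:
        # Find a record that declares itself a correction of `current`.
        successor = next(
            (aid for aid, a in store.items() if a["linked_to"] == current), None
        )
        if successor is None:
            return current
        current = successor

# A verifier that skips this walk, or never checks the revoked flag,
# happily serves the original wrong claim.
resolved = latest_version("att-1")
assert resolved == "att-2"
assert store[resolved]["revoked"] is False
```

Nothing enforces that any two verifiers implement this walk the same way, or at all; that is the uneven propagation in miniature.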
The history of incorrect records in traditional systems suggests that corrections often reach fewer people than the original errors. Credit agencies that purchase data in bulk have been repeatedly found reinserting previously corrected items after updates. Background check companies that aggregate public records often lag months behind official expungements. The mechanism of correction exists but the propagation of the correction is uneven. On a public blockchain, the propagation problem runs in the opposite direction. The incorrect record propagates instantly and completely to every node on every supported network. The correction, a new attestation linked back to the old one, requires every verifier to implement their own logic to find it, traverse the link, and prefer the corrected version. There is no standard for how that traversal should work. There is no enforcement. I do not think this disqualifies Sign Protocol as a useful attestation layer. The problem of incorrect records is not unique to it, and for many use cases, the immutability is genuinely valuable. But if the protocol is going to be used for national identity systems where incorrect attestations could affect whether a citizen accesses healthcare, benefits, or legal status, then the question of what happens when the record is wrong may require a more complete answer than the current documentation seems to provide. It might be worth asking whether the design philosophy of immutability was built for the use cases Sign Protocol started with, and whether the sovereign infrastructure ambitions require a different set of guarantees than the original architecture was designed to provide. #SignDigitalSovereignInfra $SIGN
The system says "approved." But both sides meant something different. No one caught it. @SignOfficial Two hospitals use the same form. One calls a managed illness a "prior condition." The other doesn't. The form looks the same. But the meaning is different. The document passes between them looking fine while hiding a disagreement inside.
This is not a tech problem. It is a meaning problem.
Sign Protocol lets anyone create a schema, a template for storing information. Others can build on top of it. Same format, used by everyone.
But a template for "employment" or "identity" doesn't explain what those words mean. It just has fields for them.
So two organizations can use the same $SIGN template but fill it with completely different ideas. And the system won't flag anything wrong.
The data looks correct. The disagreement stays hidden.
Same tool does not mean same understanding.
Can a schema ever fix that or does it only fix the format?
The Attestation Still Says Valid. The Fact It Describes Stopped Being True Three Months Ago.
@SignOfficial I want to start with something small. When you go to renew a professional license, the renewal board does not just check that you passed the original exam. It checks what happened between then and now. Whether you completed continuing education. Whether any complaints were filed. Whether your circumstances changed in ways that affect whether you should still hold that credential. The original qualification is not the whole picture. What happened after it matters just as much. This is the gap I have been thinking about after spending time inside Sign Protocol's attestation architecture. Sign Protocol records claims on chain. When an attester issues an attestation, the protocol captures three pieces of temporal information. There is the attestTimestamp, which records when the attestation was created. There is validUntil, which is an expiry timestamp the issuer sets at the time of creation, and zero means it never expires. And there is revokeTimestamp, which records when an attestation was actively revoked. These three fields define the temporal vocabulary of the protocol. What I find interesting is what happens when block.timestamp passes the validUntil value. Nothing, at the protocol level. The revoked field stays false. No state changes. No event fires. The attestation sits there looking identical to a valid one unless the person reading it explicitly checks whether the timestamp has passed. Expiry is data the protocol stores. It is not something the protocol enforces. Revocation is a different and more deliberate act. When an issuer calls the revoke function, the revoked field flips to true and the revokeTimestamp populates with the current block time. The record does not disappear. The attestation remains on-chain with all its original data, now annotated with the fact that it was revoked. Sign Protocol's FAQ describes attestations as meant to be final and immutable. Revocation does not delete. It annotates.
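The three fields support a verifier-side check like the sketch below. The field names mirror the documented temporal vocabulary; the checking logic is mine, because the protocol stores these values without ever evaluating them.

```python
def is_currently_valid(att: dict, now: int) -> bool:
    """Verifier-side validity check that the protocol itself never performs."""
    if att["revoked"]:
        return False  # explicitly revoked by the issuer
    if att["validUntil"] != 0 and now > att["validUntil"]:
        return False  # past expiry; note that no on-chain state changed
                      # when this moment passed, only this comparison notices
    return True
    # A third condition, the attested fact quietly ceasing to be true,
    # leaves no trace in any field, so no check here can catch it.

att = {"attestTimestamp": 1700000000, "validUntil": 1705000000,
       "revokeTimestamp": 0, "revoked": False}

assert is_currently_valid(att, now=1702000000) is True   # inside the window
assert is_currently_valid(att, now=1709000000) is False  # validUntil has passed
```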
What this means is that Sign Protocol handles two out of three validity failure modes. It handles active revocation cleanly. It handles time based expiry as a data field that verifiers must check themselves. What it does not handle at all is the third category, which is the one I keep coming back to. That third category is when a credential becomes factually wrong because circumstances changed, and no one triggered a revocation. Someone is attested as eligible for a government benefit in January. In March their income changes and they no longer qualify. The issuer does not know this has happened. The attestation still reads as valid. The revoked field is still false. The validUntil timestamp has not passed. From the protocol's perspective, nothing has changed. From reality's perspective, the credential is no longer accurate. There is no mechanism inside Sign Protocol that can detect or flag this condition. The burden of knowing that circumstances changed falls entirely on the issuer, who has to revoke the old attestation and issue a new one. If the issuer does not know, nothing happens. This is not a flaw I invented. The US Social Security Administration has been dealing with the exact same problem in a traditional system for decades. A report from the SSA's Inspector General found billions of dollars in improper payments between 2015 and 2022, most of them occurring because eligibility determinations drifted from reality between the moment they were made and when benefits were distributed. People's circumstances changed. The records did not. The payments continued. Medical licensing produces a similar picture from a different angle. Investigations have found cases where physicians surrendered a license in one state after complaints or disciplinary action, then continued practicing in another state where that history was not visible. The credential in the new state was cryptographically valid in the sense that the license was real and the issuer was legitimate. 
The underlying situation it was supposed to represent had changed in ways the second state's verification system could not detect. Sign Protocol's schema hooks offer one architectural surface for building around this problem. Hooks are smart contracts that execute on attestation creation and revocation. They can enforce conditions, reject invalid data, and fire notifications. A sophisticated implementation could connect hooks to oracles that periodically check whether conditions underlying an attestation still hold. But hooks fire on writes, not reads. There is no hook that runs when someone queries an attestation to verify it. The verification moment is outside the hook system entirely. There is a workaround pattern in the protocol documentation that involves revoking a stale attestation and linking a new one to the old record using the linkedAttestationId field. This creates a correction chain that is auditable and on chain. But the precondition for this working is that the issuer knows a correction is needed. Which brings the problem back to the same place. The protocol can record that a correction happened. It cannot detect that one is needed. The W3C Verifiable Credentials specification handles this differently, at least conceptually. It distinguishes between verification, meaning the cryptographic checks, and validation, meaning whether the claims are acceptable for a given use case. It also defines a refresh mechanism that signals to verifiers that an updated credential may be available from the issuer. None of this solves the underlying problem of detecting semantic staleness automatically, but it at least names the distinction explicitly. Sign Protocol's architecture largely inherits this limitation without naming it, which I think makes it harder to reason about.
What I keep coming back to is that Sign Protocol is building toward government-scale credential infrastructure. Sierra Leone, Kyrgyzstan, Abu Dhabi. Identity systems, benefits distribution, eligibility attestations. The environments where the gap between issuance validity and present validity is most consequential are exactly the environments Sign Protocol is pursuing. If someone's eligibility for housing assistance or food subsidy is recorded as an on-chain attestation, and their circumstances change in ways the issuer does not immediately know about, the protocol has no mechanism to bridge that gap. The credential will verify correctly. The underlying fact it describes may not still be true. I do not think this is a reason to dismiss the project. The same limitation exists in every credential system, digital or paper. Traditional systems have not solved it either, as the SSA numbers demonstrate. What I think it means is that the application layer design built on top of Sign Protocol matters enormously, perhaps more than the protocol layer itself. The hooks, the oracle integrations, the periodic refresh cycles, the re-verification logic that someone has to build into the systems using Sign Protocol for real government services. The protocol provides the vocabulary. Whether those systems are designed to catch semantic staleness is a different question entirely. Whether that design work is happening alongside the infrastructure agreements, or whether it will be figured out later, is something I cannot tell from looking at the protocol alone. $SIGN #SignDigitalSovereignInfra
Most credentials are checked once, at the moment they are issued.
A background check runs, a status is confirmed, a document gets signed.
The system records that everything was valid at that specific point in time.
The problem is that circumstances change. Someone qualifies for housing assistance today and loses that eligibility in three months. A business license is valid at issuance and lapses six weeks later. The original credential still exists. It still verifies correctly. It just no longer reflects reality.
This is the gap I keep thinking about with $SIGN. The protocol does one thing well: it records that a specific attester made a specific claim at a specific moment, and that record is cryptographically immutable. You can always verify the original attestation.
@SignOfficial What the attestation cannot do is update itself. The schema captures what was true at issuance. It has no mechanism for what happened after.
Most real world eligibility disputes do not live at the moment of issuance. They live in the six months that follow it.
One Schema, Four Countries, Four Different Ideas of What Identity Means
I started looking into this because the geography of it caught my attention. @SignOfficial Sign Protocol, in a relatively short period, has announced government-level engagements across Sierra Leone, the UAE, Thailand, and Barbados. Four countries on four different continents, with four very different relationships to statehood, digital infrastructure, and what it means to prove who you are. I wanted to understand what the same attestation schema actually looks like when it has to hold meaning across all of them. The first thing I noticed when I went looking for verification is that the four countries are not equally real as partnerships.
Sierra Leone is the most concrete. In November 2025, Sign's CEO Xin Yan and Sierra Leone's Minister of Communication, Technology and Innovation signed a memorandum of understanding covering blockchain-based national digital identity, a digital wallet platform, stablecoin payments, and asset tokenization. The ministry posted about it. Independent outlets covered it. Named officials were present. This one happened. What it has not done yet is move past the MOU stage into any actual deployment. No system is running. Sierra Leone also sits below the African average on the UN's e-government index, with roughly 85 percent of citizens lacking internet access. That context matters when imagining what a credential infrastructure rollout looks like in practice. The UAE situation is different. What Sign announced there is a partnership with The Blockchain Center Abu Dhabi, which is a private sector blockchain accelerator, not a government body. It has connections to government-adjacent institutions and can provide introductions, but it is not a government agency signing anything on behalf of the UAE state. The press release framing it as sovereign infrastructure engagement was a stretch. The partnership exists. The government dimension of it is much softer than how it has been described. Thailand I could not verify at all. No named agency. No named official. No announcement date. No Thai government source confirming anything. The claim appears in multiple crypto media articles in identical phrasing, all traceable back to Sign's own press materials. The Tiger Research report on Sign, which is the most detailed analysis available and ran to considerable length on the Kyrgyzstan and Sierra Leone deals, does not mention Thailand once. When Sign has real agreements, it shows them. There is a photo of the Sierra Leone signing. CZ was physically present in Bishkek for the Kyrgyzstan agreement. Nothing comparable exists for Thailand. Barbados is even softer. 
It appears in one sentence in syndicated marketing copy as an expansion target alongside Singapore. No deal, no discussions, no government body named. So the four-country geographic spread, which sounds like a global deployment, is actually one MOU not yet in implementation, one private-sector intermediary deal, one unverified claim, and one aspirational mention. That is worth knowing before thinking about the technical question underneath it all.
The technical question is the one I find more interesting anyway. Sign Protocol's schema system is what makes cross-border credential infrastructure possible in theory. A schema is essentially a template that defines what fields an attestation must contain and what format the data takes. A national identity schema might require name, date of birth, citizenship status, and a biometric reference. That schema gets registered on-chain and any attestation issued under it has to follow the same structure. The schema registry lives on-chain and is publicly queryable. The design is genuinely elegant for the problem it solves within a single jurisdiction. The schema is consistent. The attester ID is traceable. The record cannot be quietly altered. If a government registers a schema and issues credentials through it, a verifier anywhere with access to the chain can confirm the cryptographic integrity of the attestation. The part that gets complicated is what happens across borders. Sign Protocol can provide what researchers call structural interoperability, meaning the data format is consistent and readable across chains. What it cannot provide on its own is semantic and legal interoperability, meaning whether an attestation issued by Sierra Leone's government actually means anything to a verifier in Abu Dhabi or Bangkok. That requires bilateral agreements between governments about what they will accept from each other, which have nothing to do with the protocol. This is not a problem Sign Protocol invented. The EU's blockchain identity infrastructure, backed by 29 countries with shared regulatory frameworks and years of coordinated effort, continues to face governance onboarding issues and wallet compatibility problems. Countries with shared legal traditions and aligned data protection laws still struggle to make credentials mean the same thing across their borders. 
Sign Protocol's ambition to do this across Kyrgyzstan, Sierra Leone, the UAE, and Caribbean island nations simultaneously is asking the protocol to substitute for diplomatic and legal alignment that does not yet exist.
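The structural half of interoperability, the part the protocol can deliver, looks roughly like this sketch. The schema fields and the validation logic are illustrative, not Sign Protocol's actual registry format.

```python
# Illustrative schema registry: a schema fixes field names and types, and
# any attestation issued under it must match that structure. What a field
# like "citizenship" legally means to a foreign verifier is not encoded
# anywhere in this check.
schema_registry = {
    "national-id-v1": {
        "name": str, "date_of_birth": str,
        "citizenship": str, "biometric_ref": str,
    },
}

def conforms(schema_id: str, data: dict) -> bool:
    """Structural check only: right fields, right types, nothing about meaning."""
    schema = schema_registry[schema_id]
    return set(data) == set(schema) and all(
        isinstance(data[field], ftype) for field, ftype in schema.items()
    )

record = {"name": "A. Citizen", "date_of_birth": "1990-01-01",
          "citizenship": "SL", "biometric_ref": "ref-123"}
assert conforms("national-id-v1", record)            # the format travels
assert not conforms("national-id-v1", {"name": "X"}) # wrong shape is caught
```

A check like this passes identically in Freetown, Abu Dhabi, or Bangkok; whether the verifier's legal system accepts the issuer is decided entirely outside it.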
The schema can be the same. What the schema proves is a different question in each country, because what each country accepts as evidence of identity, eligibility, or citizenship is defined by its own laws and institutions. A schema is a container. The meaning of what goes inside it is set by people, not by the protocol. None of this means the project is without substance. Sign Protocol has processed over six million attestations. It has real backers including Sequoia and YZi Labs. Its Kyrgyzstan agreement, signed with the National Bank's deputy governor in October 2025, is the most operationally specific government deal I found, with a pilot timeline and a stated legal tender date for the Digital Som if the pilot succeeds. The technology works as attestation infrastructure. What I keep thinking about is the gap between the infrastructure and the claim being made about it. Saying a schema can hold meaning across four countries is saying that the technical container travels. It does. What I am less sure about is whether the meaning travels with it, and whether that distinction is being clearly communicated to the people who need to understand it most, which is the governments being asked to build national systems on top of it. $SIGN #SignDigitalSovereignInfra
Most Infrastructure Projects Quietly Hope A Major Centralized Exchange Will List Them.
It validates the project, brings liquidity, and puts it in front of millions of people who would never have found it otherwise.
The exchange becomes the onramp.
The tension is that the onramp shapes who arrives and how.
Coinbase just added $SIGN to its listing roadmap. Sign Protocol is building what it describes as sovereign attestation infrastructure, credential systems for governments, identity layers designed to reduce reliance on centralized intermediaries. The project's own framing is about moving trust out of institutions and into verifiable on-chain records.
@SignOfficial The mechanism at the center of this is the schema registry. Attestations on Sign Protocol follow templates that any party can register publicly. The issuer is on-chain. The claim is verifiable by anyone. No single platform controls the record.
And yet here is a centralized exchange serving as the main gateway to that infrastructure. Most people who hold $SIGN after this listing will have bought it through Coinbase. That means Coinbase's KYC process, Coinbase's jurisdiction restrictions, and Coinbase's decisions about who can access the asset all come first.
Whether that is a contradiction or just how adoption works in practice is something I keep turning over.
The Citizen Holds the Credential. The State Decided What It Says.
@SignOfficial When a government rolls out a new ID system, there is usually a press release about empowering citizens. The language is almost always the same. People will have more control over their data. Services will be more accessible. The system will be more efficient. What the press release does not say is who the real customer is in this arrangement.

I have been sitting with this question while looking at what Sign Protocol is actually building. The project started as a document signing tool, then became an attestation protocol, and has now repositioned itself around something it calls S.I.G.N., which stands for Sovereign Infrastructure for Global Nations. The name tells you most of what you need to know about where the focus has moved. The customers being pursued are not individual users. They are governments.

From what I can verify, Sign Protocol has signed a technical agreement with the National Bank of Kyrgyzstan to build the pilot platform for the country's digital currency, the Digital Som. There is a non-binding memorandum with Sierra Leone's Ministry of Communication covering digital identity and a payment system. There is a strategic partnership with a blockchain hub in Abu Dhabi that connects the project to government stakeholders in the region. These are early stage. None of them are live deployments yet. But the direction is clear enough.

The pitch to governments is straightforward. Many developing countries do not have the digital infrastructure to run modern identity systems, payment rails, and public benefits programs efficiently. Sign Protocol offers to build all three on a single connected stack. Identity credentials, a programmable currency layer, and a distribution system for payments and government benefits, all running on the same underlying technology. For a government trying to modernize quickly, that is an appealing offer.
But it is worth slowing down and looking at how this actually works, because the design choices reveal something about where the power sits. When a government uses Sign Protocol to issue a national ID credential, the government first has to register a schema. A schema is a template that defines what information the credential will contain. What fields are required. What format the data takes. What conditions have to be met before a credential gets issued. The government creates this schema and registers it in Sign Protocol's schema registry. After that, every credential issued under that schema follows the government's rules.
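To make that concrete, here is a rough sketch of what a schema-as-template might look like as a data structure. Everything in it is illustrative: the field names, the id format, and the in-memory registry are my own stand-ins, not Sign Protocol's actual SDK or on-chain registry.

```typescript
// Hypothetical sketch of a credential schema as a typed template.
// Names and shapes are illustrative, not Sign Protocol's real API.

type SchemaField = {
  name: string;
  type: "string" | "date" | "boolean";
  required: boolean;
};

type Schema = {
  id: string;          // registry key, e.g. "<issuer>/<name>@<version>"
  issuer: string;      // the government that defined the template
  fields: SchemaField[];
  revocable: boolean;  // whether credentials under this schema can be revoked
};

// A minimal in-memory stand-in for a schema registry.
const registry = new Map<string, Schema>();

function registerSchema(schema: Schema): void {
  if (registry.has(schema.id)) throw new Error(`schema ${schema.id} already registered`);
  registry.set(schema.id, schema);
}

// The issuer defines the template once; every credential after that follows it.
registerSchema({
  id: "gov.example/national-id@1",
  issuer: "gov.example",
  revocable: true,
  fields: [
    { name: "fullName", type: "string", required: true },
    { name: "dateOfBirth", type: "date", required: true },
    { name: "residencyStatus", type: "string", required: false },
  ],
});
```

The detail worth noticing is who calls `registerSchema`. In this sketch, as in the system it is loosely modeled on, that is the issuer. The citizen only ever receives credentials shaped by a template someone else wrote.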
The citizen receives the credential into a digital wallet. They can choose which parts of it to share with different services and they can use zero-knowledge proofs to prove certain things, like being above a certain age, without revealing the underlying data. This is described as giving people control over their own information, and technically it is accurate. The citizen does have some real choices in how they present what they have.

What the citizen did not have any say in is what the credential contains in the first place. They did not decide which attributes the government chose to include. They did not set the conditions for issuance. They did not write the schema. The government negotiated with Sign Protocol, defined the data structure, and now issues credentials that citizens carry. The citizen is the end user of a system they were not party to designing.

This matters more when you consider what happens when all three components are connected. If identity credentials, the digital currency, and access to government benefits all run on the same infrastructure, then the government has a single point of control over all three. Issue a credential, link it to a wallet, route benefits through that wallet. Every step is connected. The system is efficient by design.

Efficiency is genuinely valuable. There are real problems with how governments in many countries currently distribute benefits or verify identity. Paper-based systems lose people. Corruption eats into funds. Delays cause real harm. I do not want to dismiss the genuine case for building better infrastructure. But the architecture of this particular kind of efficiency creates something that did not exist before. The government that controls credential issuance also controls what the credential says. The government that controls the schema also controls whether someone is recognized as eligible for services. If the credential can be revoked, the government can revoke it.
If the wallet is tied to the credential and the credential is revoked, the person loses access to both at once. Sign Protocol's own documentation describes emergency controls and the ability for sovereign operators to manage upgrades and revocation authority as features. That is accurate. But features in a government's hands look very different from features in a user's hands.

I am not saying the intent is surveillance or control. I genuinely do not know what the intent is in Kyrgyzstan or Sierra Leone, and I am skeptical of anyone who claims certainty about how these systems will be used once deployed and operational. Early-stage agreements tell you almost nothing about governance practices years from now. What I can say is that the architecture makes certain outcomes structurally possible that were not previously possible with the same ease.

Linking identity to money to benefits on a queryable on-chain system creates a kind of visibility into people's lives that traditional bureaucratic systems, with their siloed databases and manual processes, could not easily produce. Sign Protocol's attestation explorer, SignScan, indexes and makes attestation records queryable through a public API. That is useful for verification. It is also useful for tracking.

The language around self-sovereign identity, which is the idea that individuals should own and control their own credentials, has been part of the blockchain identity space for years. The S.I.G.N. framework inherits some of that language. But when the sovereign in question is literally a nation state, and when the state is the one defining what credentials exist and what they prove, the concept of self-sovereignty starts to describe something fairly narrow. You can choose when to present your credential. You cannot choose what it says about you.
I find myself wondering what it would mean for a credential system built on behalf of a government to genuinely serve the citizen as its primary stakeholder rather than the state. Not as a rhetorical question, but as a design question. What would have to be different. Who would have to be in the room when the schemas are written. What recourse would a person have if their credential was revoked without clear cause. Sign Protocol is building infrastructure. Infrastructure does not answer these questions. But whoever decides how the infrastructure gets used will. $SIGN #SignDigitalSovereignInfra
Most token unlock schedules get described as alignment mechanisms. The idea is that if you cannot sell immediately, you are incentivized to care about the long term. That framing shows up in almost every project's tokenomics section.
The problem is that a lock does not change what someone wants. It changes when they can act on it.
Midnight distributed $NIGHT across 8 million wallets with a 450-day gradual unlock. The release is slow by design, spread across a timeline that extends well past the initial launch period.
The mechanism is straightforward. Tokens thaw in portions over time rather than all at once. Each portion that unlocks gives the holder a decision point rather than a single exit moment.
Whether that produces patient holders or just staggered selling is harder to say. The schedule shapes behavior at the margins but it does not rewrite motivations.
What it does create is a long window of structured uncertainty. Whether that window is enough time for the network to build something worth waiting for is the real question. @MidnightNetwork #night
Midnight Built Its Architecture for Enterprise. It Distributed Its Token to 8 Million Crypto Wallets
@MidnightNetwork There is something that does not quite fit when you sit with Midnight's positioning long enough. The network is built around zero knowledge proofs, selective disclosure, and programmable privacy for sensitive data. The documentation talks about regulated industries. Finance. Healthcare. Identity verification. The kind of use cases where a legal team needs to sign off before anything goes to production. The token distribution reached 8 million wallets. Those two things do not obviously belong to the same story.

Eight million crypto wallets means retail. It means people who bought NIGHT the way people buy any new token, because it was available, because someone in a Telegram group mentioned it, because the chart looked interesting. It means an audience that is broadly familiar with how crypto works but is not thinking about enterprise compliance infrastructure when they wake up in the morning.

The architecture Midnight is building is pointed somewhere else entirely. The system, at its core, is designed so that an application can prove something about private data without exposing the data itself. A hospital could verify a patient's eligibility without sharing the patient's records. A financial institution could confirm a counterparty meets regulatory thresholds without handing over the full onboarding file. A company could demonstrate compliance to an auditor without opening the underlying documents to inspection. These are real problems in regulated industries and the zero knowledge approach is a genuinely useful way to think about them.
The workflow for an enterprise using Midnight would look roughly like this. An application holds private data locally. It runs computation through Compact, Midnight's developer language. That computation generates a proof confirming whatever condition needs to be confirmed. The proof goes on-chain. The raw data never does. The counterparty or auditor or regulator sees a verified result without seeing what went into it.

That workflow is clean in the abstract. In practice, it requires an enterprise to trust the architecture, get legal comfortable, pass a security review, and integrate the toolchain into existing systems. That is a long sales cycle. That is a procurement process. That is months, sometimes years, between a developer building a prototype and a company actually running something in production.

The retail wallet holder who received $NIGHT in a distribution is not waiting for that cycle to complete. They are watching price, watching volume, watching whether the project is still being talked about in six months. The horizon is different. The information they care about is different. The reasons they would hold or sell the token have almost nothing to do with whether a healthcare company in Frankfurt eventually deploys a selective disclosure application on the network.

This is not unusual in crypto. Most networks carry this split to some degree. Infrastructure projects raise awareness through retail distribution and then point at enterprise adoption as the long term thesis. The token goes out broadly. The actual use case is narrower and slower to arrive. What is slightly different with Midnight is how wide that split feels given the specificity of the architecture. This is not a general purpose smart contract platform that could plausibly serve both audiences through different applications.
The privacy tooling, the compliance framing, the selective disclosure mechanics, the Compact language design decisions, all of it is oriented toward a particular kind of sensitive, regulated, high stakes data environment. The architecture has a point of view about who it is for. Eight million wallets is a different population. Some portion of those wallets belong to people genuinely interested in privacy technology and its implications. Some belong to developers who will read the documentation and have opinions. But a meaningful number are probably held by people who will never interact with the network in any technical sense, who are there because NIGHT was distributed and holding seemed like the path of least resistance.
Neither of these groups is wrong to exist. The retail distribution builds awareness and liquidity. The enterprise focus builds the case for long term value. In theory those things compound. The challenge is that they operate on completely different timescales and respond to completely different signals.

An enterprise customer deciding whether to build on Midnight wants to see documentation quality, developer support, network stability, regulatory clarity, and evidence that other institutions are moving in the same direction. A retail holder watching the same network wants to see price action, exchange listings, partnership announcements, and some signal that the project is still alive and moving. The information that satisfies one audience barely overlaps with what satisfies the other.

This creates a communication problem that is hard to solve well. Leaning into the enterprise story makes the network sound slow and institutional, which does not hold retail attention. Leaning into retail momentum and token price narrative makes it harder to be taken seriously in the regulated environments the architecture is actually built for. Most projects end up doing both at once and slightly satisfying neither.

Midnight's team is clearly aware of this tension at some level. The documentation is technical and serious. The token distribution was broad. Whether those two things are being managed as parts of a coherent strategy or whether they are just coexisting while the team focuses on what it can control, I genuinely cannot tell from the outside.

What I keep thinking about is the enterprise customer sitting across the table from a Midnight integration pitch sometime in the next year or two. They are doing their due diligence. They are looking at who else holds the token, what the trading activity looks like, what the retail conversation around NIGHT sounds like on social media. Enterprise procurement people are not immune to optics.
A token with 8 million holders spread across retail crypto culture is a different looking counterparty than a focused infrastructure project with a small, deliberate stakeholder base.
Whether that optics gap matters in practice probably depends on how far along the enterprise pipeline actually is by the time anyone is sitting across that table. The thing I find myself wondering is simpler than any of this. When Midnight imagines a customer, which one are they actually picturing? #night
Hardware cycles don’t move for any single use case. GPUs get better because something else demands it. Lately, that “something” has been AI training, not cryptography.
That creates a quiet mismatch. Systems that depend on heavy computation inherit the pace and priorities of an industry they don’t shape. Efficiency, in that sense, isn’t entirely internal.
Midnight seems to sit in that gap. The model leans on zero-knowledge proofs, but the long-term cost assumptions appear tied to GPU improvements happening elsewhere.
One simple piece of this: proof generation. It’s computationally expensive today, but the expectation is that better parallel hardware will compress those costs over time, making privacy-preserving transactions more practical for $NIGHT.
But if the cost curve is partially external, then optimization isn’t just a protocol question. It becomes a question about alignment with another industry’s trajectory.
So the system isn’t only scaling with its own design; it’s also waiting on progress it doesn’t control. #night @MidnightNetwork
Midnight Spent Three Phases Building a Chain. The Final Phase Describes Something Else Entirely.
There is a pattern in protocol roadmaps that I have noticed more than once. The thing being built in the early phases is not quite the same thing being described in the later ones. The naming stays consistent. The branding stays consistent. But the product has shifted somewhere along the way, and the shift is not always announced clearly. @MidnightNetwork Midnight's final roadmap phase, called Hua, is where I noticed this.
The earlier phases read as inward work. Get a privacy chain running. Get the validator infrastructure stable. Get zero-knowledge proofs working in a way developers can actually use. Get the Compact toolchain to a point where someone without a cryptography background can build with it. Each piece is pointing at the same thing: a functioning network people come to.
Hua is pointed somewhere else. The plan connects Midnight to Ethereum and Solana through LayerZero. After that connection exists, an application on another chain could theoretically route sensitive computation through Midnight without deploying anything on Midnight directly. The privacy layer becomes something external systems call into. Not a destination. More like a service sitting behind other things.
That gap between the two descriptions is what I keep thinking about.
A chain people come to has a certain logic. Applications build up on it. Users follow the applications. The network accumulates value because of what lives there. The earlier roadmap phases are pointing at that. Hua is pointing at something different: a network whose value is in what it can do for ecosystems that already have users elsewhere.
The rough flow in that world would look something like this. An application on Ethereum needs to handle something sensitive. It constructs a message and sends it through LayerZero to Midnight. Midnight processes the private computation, generates a proof, and passes a verified result back. The Ethereum application receives a confirmed outcome. The private inputs never moved beyond Midnight. The user on Ethereum sees a result. They may have no visibility into where the proof came from.
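A toy mock of that round trip, to make the shape visible. None of these names are LayerZero's or Midnight's real interfaces, and the "proof" is a plain object rather than actual zero-knowledge output; the only point is where the private data sits and what crosses the bridge.

```typescript
// Mock of the described round trip. All names are placeholders, not real
// LayerZero or Midnight interfaces.

type Message = { from: string; payload: unknown };
type Result = { verified: boolean; proofId: string };

// Private state lives on the Midnight side; only queries and results
// cross the bridge.
const midnightPrivateState: Record<string, number> = { "acct-42": 250 };

// Stand-in for Midnight: runs the private computation, returns only a
// verified outcome. The inputs never leave this function's scope.
function midnightEndpoint(msg: Message): Result {
  const { account, threshold } = msg.payload as { account: string; threshold: number };
  const balance = midnightPrivateState[account] ?? 0;
  return { verified: balance >= threshold, proofId: "proof-0001" };
}

// Stand-in for the bridge: routes the message and carries the result back.
function routeViaBridge(msg: Message): Result {
  return midnightEndpoint(msg);
}

// The Ethereum-side application sees only the confirmed outcome.
const result = routeViaBridge({
  from: "ethereum:0xapp",
  payload: { account: "acct-42", threshold: 100 },
});
```

The Ethereum application receives `result` and nothing else; the balance that satisfied the threshold never appears in what comes back.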
Whether that works cleanly in practice is something I genuinely cannot tell from the documentation. The architecture makes sense on paper. Whether the latency, the bridging assumptions, and the developer experience actually hold together under real usage is a different question.
What I find harder to think through is what $NIGHT looks like in that setup.
On a standalone chain, the token sits close to the activity. Validators hold it. Computation costs flow through DUST, which derives from it. Someone using the network is, at some level, touching the token economics whether they think about it or not.
In a service model, that closeness gets harder to see. A developer on Ethereum integrating Midnight's proof layer is probably thinking about whether it is reliable and whether it is cheap. The token sitting underneath the architecture may be real, but it is further from the surface. Something that gets abstracted away rather than something the builder consciously interacts with.
I am not sure what that means for the incentive structure over time. Whether validator rewards hold up when the primary users are not Midnight-native. Whether the DUST mechanics behave the same way when demand comes from cross-chain routing rather than direct usage. These feel like open questions rather than things the documentation has settled.
There is also a question about what it means to describe both of these things, a standalone privacy chain and a privacy layer for other chains, as stages in the same continuous roadmap. Maybe they are genuinely continuous. Maybe building the standalone chain first is the right foundation for the service model later. That reading is possible.
But a network that ends up used mainly as cross-chain infrastructure is a different thing in practice from one used as a destination. The people building on it, the reasons they build, the token dynamics, the ways you would measure whether it is working: all of those shift depending on which version actually lands.
Hua is far enough out that I am holding it loosely. A lot changes between now and then. I just find it worth noting that the earlier roadmap and the later roadmap are, at some level, describing two different products. They may belong to the same vision. But sitting with them separately feels more honest than letting one slide into the other without noticing the gap.
The question I keep coming back to is simpler than any of the architecture. If Midnight's proof layer ends up running quietly underneath applications on Ethereum and Solana, will anyone using those applications ever think of themselves as being on Midnight at all? #night $NIGHT #ethereum #solana
Most infrastructure gets described as neutral. The internet does not decide who builds on it. A database does not choose whose data it stores. Neutrality is usually how you know the infrastructure is working. @SignOfficial But neutral infrastructure still produces outcomes. And those outcomes tend to reflect who shows up to use it first, and at what scale.
This is something I keep thinking about with $SIGN . The schema registry is open. Anyone can register a credential format. Anyone can become an attester. The protocol does not gatekeep any of that.
What it also does not do is determine which attesters get recognized by the platforms and applications built on top of it. That decision happens one layer up, inside products and services the protocol does not control.
If the major platforms default to a small set of trusted issuers, the openness at the protocol level does not prevent concentration from forming above it.
Whether open infrastructure and equal access end up meaning the same thing in practice is still an open question. #SignDigitalSovereignInfra
The Credential Moved But I Am Not Sure the Trust Came With It.
I keep thinking about something that happens in everyday life. You gather documents from one place, a university, a bank, a government office, and then spend time convincing a second place that those documents mean what they say they mean. The credential exists. The information is right there in it. But the trust that made it worth issuing does not seem to survive the move.
That is what I keep coming back to when I look at credential portability. @SignOfficial As a technical problem, portability is mostly about coordination. How do you move a verifiable claim from one platform to another without breaking what makes it verifiable? How do you do that across different blockchains, different systems, different organizations? These are real engineering problems and progress has been made on them.
But there is a quieter problem sitting behind the coordination problem. And solving coordination does not touch it.
From what I can see, Sign Protocol is not trying to live on one chain. It runs the same attestation structure across Ethereum, Solana, TON, and several others. The idea is simple enough: where you issue should not limit where you can verify. The same contract structure runs on each chain. There is a shared indexing service that lets you query across all of them. On the surface this looks like portability. Issue on one chain, verify on another.
Looking at how it actually works, the picture gets more specific.
Attestations on Sign Protocol are tied to the chain where they were created. A schema registered on Base lives on Base. When a verifier on a different chain needs to check an attestation, there is no on-chain way to do that directly. What bridges the gap is an off-chain indexing service, basically an API that reads all the supported chains and makes the data searchable. The credential does not move trustlessly across chains. It becomes findable through infrastructure that Sign Protocol runs.
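The indexer model is easy to sketch. The record shapes, chain names, and query function below are all hypothetical, but they show the structural point: the merged view exists only off-chain, assembled by whoever runs the index.

```typescript
// Sketch of an off-chain index over per-chain attestation records.
// Record shapes and chain names are illustrative, not Sign Protocol's schema.

type Attestation = { uid: string; chain: string; schemaId: string; attester: string };

// Each chain holds only its own records...
const baseRecords: Attestation[] = [
  { uid: "0xaaa", chain: "base", schemaId: "kyc-basic", attester: "0xissuer1" },
];
const polygonRecords: Attestation[] = [
  { uid: "0xbbb", chain: "polygon", schemaId: "kyc-basic", attester: "0xissuer2" },
];

// ...and the index is what makes them findable together. A verifier querying
// this merged view is trusting whoever assembled it, not an on-chain mechanism.
const index = [...baseRecords, ...polygonRecords];

function queryBySchema(schemaId: string): Attestation[] {
  return index.filter((a) => a.schemaId === schemaId);
}
```

Nothing on either chain enforces that `index` is complete or honest; that guarantee lives entirely with the indexer's operator.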
This is worth noting without making too much of it. The indexer works and it is useful. But cross-chain verification right now depends on trusting Sign Protocol's own infrastructure, not on the trustless mechanics the protocol is often described as providing.
The part of the architecture that comes closest to genuine portability is the hybrid storage option. Sign Protocol lets attestation data live on Arweave or IPFS rather than on-chain, with only the core metadata sitting on the blockchain. When the actual data is stored on content-addressed decentralized storage, it becomes accessible from anywhere regardless of which chain the attestation came from. The team calls this lazy verification. You check the data client-side when you need it, without being locked to any particular blockchain. This is the piece that most honestly delivers on what portability promises.
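The mechanics of that client-side check are simple to sketch. The version below uses a bare SHA-256 digest as the content address, which is a simplification: Arweave and IPFS use their own addressing formats (IPFS CIDs are multihashes, not raw digests), but the verification logic has the same shape.

```typescript
import { createHash } from "node:crypto";

// Client-side "lazy verification" sketch: the chain stores only a content
// hash; the data itself lives off-chain and can be fetched from anywhere.

function contentAddress(data: string): string {
  return createHash("sha256").update(data).digest("hex");
}

// At issuance: attestation data goes to storage, its address goes on-chain.
const attestationData = JSON.stringify({ holder: "0xabc", degree: "BSc" });
const onChainRef = contentAddress(attestationData);

// At verification: fetch the data, re-hash it, compare against the reference.
// No particular chain needs to be consulted for this step.
function lazyVerify(fetched: string, ref: string): boolean {
  return contentAddress(fetched) === ref;
}
```

Any tampering with the fetched bytes changes the digest, so the check fails without needing to trust the storage layer itself.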
But even here, a harder question comes up. The credential is now technically portable. The cryptographic check works. What has not traveled with it is the reason anyone should care who issued it.
Sign Protocol's own documentation is actually quite honest about this. Their model places the trust layer, meaning societal institutions, relationships, and reputations, at the top of the stack, above the protocol itself. The protocol handles the infrastructure. It does not handle the trust. Their framing says plainly that historically, claims were accepted based on relationships and institutional trust, and that in distributed digital systems those assumptions become fragile. The protocol is a response to that fragility. It is not a solution to the underlying question of what makes an issuer worth trusting.
When a verifier receives a credential issued by an attester on a different chain, they can confirm the cryptography. That the record was not tampered with, that it came from the stated address. What they cannot confirm through the protocol alone is whether that address belongs to an institution whose judgment they should rely on. That question sits outside the system. It requires some shared understanding between the people issuing and the people verifying, a governance structure, a reputation that somehow travels alongside the technical record.
The W3C Verifiable Credentials work covers similar ground and is explicit about this. Trust establishment is deliberately left out of scope. The specification handles cryptographic verification. It leaves verifiers to figure out which issuers they trust for which claims, using whatever framework their context requires. In practice, implementations in Europe's digital identity infrastructure have handled this through trust registries, structured lists of accredited issuers that verifiers can check. The credential is trusted because the issuer is listed. The issuer is listed because a governance body approved them.
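The two-layer check that pattern implies, cryptographic validity plus issuer accreditation, can be sketched in a few lines. The registry contents and identifiers here are invented for illustration; in a real deployment they would come from a governance body, which is exactly the point.

```typescript
// Two separate checks: is the record cryptographically sound, and is the
// issuer on an accredited list for this kind of claim? The registry's
// contents are a governance decision, not something the protocol derives.

type Credential = { issuer: string; claimType: string; signatureValid: boolean };

// Stand-in for a governance-maintained trust registry: which issuers are
// accredited for which kinds of claims.
const trustRegistry: Record<string, string[]> = {
  "did:example:university-a": ["degree"],
  "did:example:bank-b": ["account-standing"],
};

function accept(c: Credential): boolean {
  const cryptoOk = c.signatureValid; // stand-in for real signature verification
  const issuerOk = trustRegistry[c.issuer]?.includes(c.claimType) ?? false;
  return cryptoOk && issuerOk; // both layers must pass
}
```

A credential can pass the first check and still fail the second; that second failure is the trust problem the protocol layer leaves unsolved.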
Sign Protocol's more recent positioning around national-scale infrastructure moves in a similar direction. It references W3C standards and issuer accreditation. That shift matters. It suggests the team understands that the protocol layer alone is not enough, and that governance frameworks are a necessary part of the stack. The actual specifications for how those layers would work have not been published in any detail yet.
There is one mechanism in Sign Protocol's architecture that approaches this problem from a different angle. Their zkAttestation approach uses something called TLSNotary to embed cryptographic proof of an underlying data source directly into the credential. Instead of a verifier having to trust that a particular institution says a user is verified, the credential itself can carry proof that the verification status was pulled from a genuine web session with that institution. The trust anchor shifts from the reputation of the issuer to a cryptographic proof about a data source. This is the most interesting part of the architecture because it partially sidesteps the issuer credibility problem for a specific type of claim, specifically claims about facts that exist on the web and can be proven through the HTTPS layer.
It does not work for everything. Judgments, assessments, claims that depend on someone having made a considered decision rather than observed a checkable fact, those cannot be reduced to a web proof. For that category, the trust problem stays exactly where it always was.
What becomes clearer after sitting with all of this is that portability and trust are related but genuinely different problems. Portability is a coordination problem and technical solutions can get you most of the way there. The trust problem is a governance problem. It requires someone to decide which issuers are authoritative for which claims, and to publish that in a form that verifiers in other contexts can actually use.
Whether a decentralized attestation protocol can solve that governance problem from the ground up, through reputation, through open schemas, through credentials that carry their own evidence, is something I am genuinely uncertain about. The other possibility is that the protocol ends up as infrastructure sitting beneath existing governance frameworks, the blockchain layer for systems whose trust hierarchies are still decided by institutions and regulators.
It might be worth asking whether, in the credential systems that actually matter most, the trust was ever really in the credential itself, or always in the institution standing behind it. #SignDigitalSovereignInfra $SIGN
Most compliance systems work the same way. Someone collects the data, stores it, and agrees to produce it when required. The privacy is in who gets to ask, not in whether the data exists.
That model creates a quiet assumption in blockchain privacy projects. If a system can prove compliance, something somewhere knows enough to prove it.
Midnight frames regulatory compliance as a feature built into the architecture. $NIGHT sits on a network designed to let applications verify identity or eligibility without exposing the underlying data.
The mechanism it uses is selective disclosure. A zero-knowledge proof can confirm that a condition is met without revealing why. The raw data does not move. Only the proof does.
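Even a toy version of that shape makes the tension visible. This is not Midnight's Compact or real zero-knowledge cryptography, just a placeholder prover, but its function signature is the interesting part: the output reveals only the predicate result, while the input is the raw data itself.

```typescript
// Toy selective-disclosure shape. Not real ZK: a placeholder prover that
// returns only a predicate result. Note the input, though: the function
// receives the raw birthdate. Whatever runs this has access to it.

type DisclosureProof = { statement: string; holds: boolean };

function proveOver18(dateOfBirth: string, today: string): DisclosureProof {
  // Deliberately naive year-only arithmetic; precision is not the point here.
  const age = new Date(today).getFullYear() - new Date(dateOfBirth).getFullYear();
  return { statement: "subject is over 18", holds: age >= 18 };
}

// The verifier sees this object, never the birthdate behind it.
const proof = proveOver18("1990-04-12", "2025-06-01");
```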
But a proof had to be generated by something. That something had access, even briefly, to the information being proved.
Where that access lives, who controls it, and under what conditions it can be compelled: that is the part the documentation is quieter about. #night @MidnightNetwork
Midnight Calls Cardano a Partner. But Right Now, Cardano Is Holding the Keys.
There is a small thing worth noticing in how Midnight describes its relationship to Cardano. It does not say it runs on Cardano. It says it is a partner chain. That distinction is doing a lot of work, and it is worth sitting with before accepting it at face value. Partner implies something lateral. Two systems cooperating. Each with its own concerns, its own direction, its own set of decisions to make. The word carries a suggestion of independence that the technical reality may not fully support.
Midnight inherits Cardano's security model. That is not a minor detail. It is the foundation the whole network sits on. And what it means in practice, at least in this early period, is that the people responsible for validating activity on Midnight are largely drawn from Cardano's existing pool of stake pool operators. Cardano SPOs are the node operators who run Cardano's proof-of-stake consensus. They have been doing this for years. They understand the infrastructure. They hold the stake. And Midnight, as it stands, is leaning on that existing ecosystem to bootstrap its own security before it has the validator set to stand on its own.

The workflow, roughly, is this. Midnight processes transactions and generates proofs for private computation. Those results need to be settled and secured. Rather than building an entirely independent validator network from scratch, Midnight uses Cardano's SPO infrastructure to provide that security layer in the early period. The Cardano stake that already exists backs the integrity of what happens on Midnight. Settlement traces back to a chain that has been running and tested for years.

From one angle this looks like a sensible shortcut. Bootstrapping security is one of the hardest problems a new network faces. A fresh chain with no validators and no stake has nothing to protect it. Midnight sidesteps that cold start problem by inheriting an established ecosystem. The security is real. The validators are real. The track record is real.

From another angle it raises a question that the documentation does not fully resolve. If your security model depends on another chain's operator set, how much of your decentralization timeline is actually yours to control?
@MidnightNetwork has published a roadmap that puts the transition to a permissionless, open validator set somewhere in Q2-Q3 2026. That transition is the moment when Midnight becomes something that looks more like an independent network. Before that point, the validator set is managed, the SPO participation is structured, and the autonomy is partial.

But the transition itself is not purely a Midnight decision. It involves the Cardano SPO ecosystem moving into a new role, or Midnight building out enough of its own validator infrastructure to reduce the dependency. Either path takes time and coordination that extends beyond Midnight's own development team.

This matters because partner chain is a relationship as much as it is a technical architecture. Relationships involve dependencies. They involve negotiation. They involve the possibility that the two parties have slightly different incentives at different moments. Cardano SPOs are not running Midnight nodes out of ideological commitment to privacy technology. They are running infrastructure because it makes economic and practical sense for them. That calculus could shift.

None of this is a criticism of the design choice. Inheriting security from a mature ecosystem is a legitimate strategy. Ethereum rollups do something structurally similar, relying on Ethereum's base layer security while doing their own computation elsewhere. The dependency is the point, not the problem.
What is less clear is how much that dependency shapes the narrative Midnight is building around itself. A privacy network that presents as independent but is, in its early period, significantly tethered to another chain's operator community is a more complicated thing than the headline suggests. Not wrong. Just more complicated.

There is also the question of what Cardano gets from this arrangement, and whether that alignment holds over time. Cardano has been building toward a broader ecosystem of partner chains. Midnight is meant to be one of the more prominent ones. The relationship is mutually beneficial in theory. But theory and practice tend to diverge when timelines slip or priorities shift on either side.

The thing that stays with me after reading through the architecture is that Midnight's autonomy exists on a gradient, not as a binary. Right now it is closer to the dependent end of that gradient. The roadmap points toward the independent end. The distance between those two points is not just a technical problem. It is a coordination problem, an economic problem, and in some ways a political problem within the Cardano ecosystem.

How quickly Midnight can move along that gradient depends on factors that are not entirely in its control. The readiness of the SPO community. The development of its own validator tooling. The pace of the broader Cardano partner chain rollout. Any one of these could compress or extend the timeline.

The word partner is accurate. It is also a gentle way of describing what is, for now, a meaningful dependency. Whether that dependency becomes the kind of relationship where both parties genuinely set their own direction, or whether it remains structurally asymmetric, is something that Q2 2026 will start to answer but probably not finish answering.

The question I keep returning to is simpler than all of that. When Midnight describes itself as a partner chain, who exactly is the senior partner? $NIGHT #night
Neutral infrastructure is often presented as a feature. A road doesn't care who drives on it. A protocol doesn't care who issues through it. The neutrality is the point.
But roads built in certain directions still shape where development happens. Neutrality in the infrastructure layer doesn't neutralize the decisions made above it.
This is something worth sitting with when looking at $SIGN. The protocol itself does not decide who becomes an attester. Anyone can issue credentials through the schema registry. The mechanism is open.
What that means in practice depends on which institutions, platforms, or networks actually adopt the attester role. If credential issuance concentrates among a small group of recognized parties, the underlying neutrality of the protocol doesn't prevent unequal access to verified status.
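The "gatekeeping one layer up" dynamic can be sketched in a few lines. This is a hypothetical illustration, not Sign Protocol's actual API: the names (`issue_attestation`, `verifier_accepts`, `RECOGNIZED_ATTESTERS`) are invented. It shows how issuance can be permissionless at the protocol layer while access to effective verified status is still gated, because verifiers choose which attesters they recognize.

```python
# Hypothetical sketch of open issuance vs. verifier-side gatekeeping.
# Not real Sign Protocol code; all names are illustrative.

RECOGNIZED_ATTESTERS = {"0xBigExchange", "0xGovRegistry"}  # one verifier's private trust list

def issue_attestation(attester: str, subject: str) -> dict:
    # Open at the protocol layer: anyone can issue through the schema registry.
    return {"attester": attester, "subject": subject, "schema": "identity-v1"}

def verifier_accepts(attestation: dict) -> bool:
    # Gatekeeping reappears one layer up, in each verifier's trust policy.
    return attestation["attester"] in RECOGNIZED_ATTESTERS

a1 = issue_attestation("0xBigExchange", "alice")
a2 = issue_attestation("0xSmallCommunityDAO", "alice")
print(verifier_accepts(a1))  # True
print(verifier_accepts(a2))  # False: validly issued, but not "verified" where it counts
```

Both attestations are equally valid on chain; only the verifier's allowlist separates them, which is exactly where the neutrality of the protocol stops mattering.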
@SignOfficial Sign Protocol doesn't control that outcome. Whether open infrastructure produces open access, or just moves the gatekeeping one layer up, is still an open question. #SignDigitalSovereignInfra