This test focuses on the storage performance of a privacy-chain reconciliation DApp in the Midnight ecosystem, emphasizing the stability, growth, and backup/recovery behavior of local LevelDB private-state storage. Over a 20-day testing period, the private-state folder grew from 1.2GB to 4.8GB, and the test also revealed room for optimization in the official backup mechanism, which may affect data security and compliance retention for enterprise-grade applications. This article uses the test data to analyze the root causes and scope of the issues objectively, and offers actionable operational suggestions for Midnight ecosystem developers and operators.

The DApp under test provides on-chain privacy reconciliation services for small and medium-sized institutions; it has moderate daily active users but a high transaction frequency, is built on Midnight's Kachina protocol, and stores all private state in local LevelDB. The purpose of the test is to verify the applicability of this storage solution in real business scenarios, identify potential operational risks, and provide data to support subsequent optimization.
Monitoring the node for 20 consecutive days showed the private-state folder growing from an initial 1.2GB to 4.8GB, a relatively fast rate. Investigation traced this to two causes. First, the Kachina protocol does not yet implement automatic cleanup of contract witness caches, intermediate data from historical proofs, or rollback logs, so temporary data accumulates continuously. Second, LevelDB itself exhibits write amplification: updates are appended rather than written in place, so old versions of data occupy disk until a compaction (merge) pass removes them, further inflating storage use.
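The append-then-compact behavior described above can be illustrated with a toy model. This is a deliberate simplification for intuition only, not the real LevelDB on-disk format (which stores sorted SST files merged level by level): the point is that superseded values linger on disk until a compaction pass rewrites the data.

```python
# Toy model of LevelDB-style append writes: an update does not overwrite
# in place, so superseded versions occupy space until compaction.
class AppendOnlyStore:
    def __init__(self):
        self.log = []  # (key, value) records, newest last

    def put(self, key, value):
        self.log.append((key, value))  # old versions remain in the log

    def get(self, key):
        # The newest record for a key wins.
        for k, v in reversed(self.log):
            if k == key:
                return v
        return None

    def compact(self):
        # Keep only the latest version of each key. (Real LevelDB
        # compaction additionally merges SST files level by level.)
        latest = {}
        for k, v in self.log:
            latest[k] = v
        self.log = list(latest.items())

store = AppendOnlyStore()
for i in range(5):
    store.put("witness:contract-1", f"proof-v{i}")  # 5 versions pile up
print(len(store.log))   # 5 records before compaction
store.compact()
print(len(store.log))   # 1 record after compaction
```

Without an automatic cleanup or compaction schedule, the "log" in this sketch only ever grows, which is the mechanism behind the 1.2GB-to-4.8GB expansion observed in the test.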
In addition, Midnight preserves privacy by serializing and encrypting every value stored in LevelDB. This strengthens data security but adds storage overhead: in practice, on-disk usage ended up roughly six times the raw data size. Concretely, an institution's daily reconciliation produces about 50MB of raw state, which grows to about 200MB after LevelDB serialization and to roughly 300MB after encryption. Long-term operation therefore requires careful storage capacity planning.
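A quick back-of-envelope check of the figures above (pure arithmetic on the reported measurements, nothing more):

```python
# Reported measurements from the 20-day test.
raw_mb = 50          # raw daily reconciliation state
leveldb_mb = 200     # after LevelDB serialization
encrypted_mb = 300   # after the encryption layer

serialization_factor = leveldb_mb / raw_mb        # 4.0x from serialization
encryption_factor = encrypted_mb / leveldb_mb     # 1.5x from encryption
total_factor = encrypted_mb / raw_mb              # 6.0x overall

# Average daily growth over the 20-day observation window:
growth_gb_per_day = (4.8 - 1.2) / 20              # ~0.18 GB/day
print(total_factor, round(growth_gb_per_day, 2))
```

Most of the expansion comes from serialization rather than encryption, which matters when deciding where optimization effort should go.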
Backup and recovery was a key focus of this test. Midnight's official documentation (version: Midnight v1.3.0) states that private state is stored locally by users, which avoids data-leakage risk, but testing found that the official backup script has room for improvement: it backs up only the encrypted state files and does not also capture the key index required for decryption. On restore this produces an 'unable to verify state integrity' error, and the data cannot be recovered.
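A minimal sketch of a corrected backup routine that bundles both artifacts into one archive. The function name and path layout are hypothetical, not the official script; adapt the paths to your node's actual directory structure.

```python
import tarfile
from pathlib import Path

def backup_private_state(state_dir: str, key_index: str, dest: str) -> str:
    """Bundle the encrypted state directory AND the key index into a single
    archive, so a restore can both decrypt the state and verify its
    integrity. Paths are hypothetical placeholders for this sketch."""
    dest_path = Path(dest)
    dest_path.parent.mkdir(parents=True, exist_ok=True)
    with tarfile.open(dest_path, "w:gz") as tar:
        tar.add(state_dir, arcname="private-state")  # encrypted LevelDB files
        tar.add(key_index, arcname="key-index")      # the piece the official script omits
    return str(dest_path)
```

Run this against a stopped node or a filesystem snapshot: LevelDB files can change mid-backup during compaction, which would corrupt the archive.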
For remote backup, an AWS S3 solution was tested and exposed a genuine trade-off: if the key stays local, the encrypted state files in the cloud cannot be decrypted, defeating the purpose of the backup; if the key is synchronized to the cloud, that contradicts the design intent of 'private state never leaves the local machine'. Developers must find a reasonable balance between data security and backup convenience.
Key note: distinguish bugs from design trade-offs. The storage issues found in this test fall into two categories. The first is engineering-level optimization points (e.g., the backup script omitting the key index, the lack of an automatic cleanup mechanism), which can be fixed in subsequent releases. The second is design-level trade-offs (e.g., local LevelDB storage for privacy protection, the storage overhead of the encryption layer), which are reasonable choices in the balance between privacy and performance. The Kachina protocol's privacy design is sound and technically advantageous; its engineering implementation still has room to improve in operational friendliness.
To assess the competitiveness of Midnight's storage solution, it was compared with other privacy chains' storage designs: Aleo's record model supports pruning historical records, retaining only currently valid data; Filecoin's FVM has built-in IPFS integration, allowing bulk state data to be migrated to distributed storage and reducing local storage pressure. By contrast, Midnight does not yet offer storage partitioning, data archiving, or hot/cold data separation; all data is stored together, which may raise operational cost and pressure over long-term operation and indicates room for optimization.
Testing of state synchronization shows that when a new device joins, synchronizing private-state Merkle proofs is still inefficient. Because of the Kachina protocol's concurrency design, synchronizing multiple contracts at once readily triggers LevelDB lock contention: syncing the states of 80 contracts took 42 minutes, with CPU utilization below 10% and the bottleneck concentrated in I/O waits. Synchronization also does not yet support resuming from a checkpoint, so a network interruption forces a full re-sync. Finally, LevelDB SST files are binary and may be locked by the running process, so naive rsync-based incremental sync can easily produce inconsistent data and must be guarded against operationally.
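Pending official support for resumable sync, the serialize-and-checkpoint workaround can be sketched as follows. `sync_one` stands in for whatever call actually pulls one contract's state, and the checkpoint file format is an assumption of this sketch, not part of the protocol.

```python
import json
from pathlib import Path

def sync_contracts(contract_ids, sync_one, checkpoint="sync.ckpt"):
    """Synchronize contract states one at a time (avoiding LevelDB lock
    contention from concurrent syncs) and persist progress after each
    contract, so an interrupted run resumes where it left off.
    `sync_one` is a hypothetical per-contract sync callback."""
    ckpt = Path(checkpoint)
    done = set(json.loads(ckpt.read_text())) if ckpt.exists() else set()
    for cid in contract_ids:
        if cid in done:
            continue  # already synced before the interruption
        sync_one(cid)
        done.add(cid)
        ckpt.write_text(json.dumps(sorted(done)))  # checkpoint after each contract
```

Strict serialization trades some wall-clock time for predictable I/O, which matters here because the observed bottleneck was I/O waiting rather than CPU.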
On operational cost: this medium-scale DApp (800+ daily active users, averaging 12 reconciliations per user) is projected to consume roughly 450GB of storage per year, which is high relative to traditional databases once local disk replacement and cloud storage leasing are counted. Furthermore, at large data volumes LevelDB's compaction tasks trigger write stalls: during peak reconciliation periods, state write latency rose from 12ms to 2.3 seconds, which can affect business smoothness. This can be alleviated through operational tuning.
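Breaking the projection down (straightforward arithmetic on the test's own figures; the per-reconciliation footprint is an implied estimate, not a direct measurement):

```python
daily_users = 800
recs_per_user = 12
annual_storage_gb = 450   # projected figure from the test

daily_ops = daily_users * recs_per_user            # 9,600 reconciliations/day
daily_gb = annual_storage_gb / 365                 # ~1.23 GB of growth per day
kb_per_rec = daily_gb * 1024 * 1024 / daily_ops    # ~135 KB per reconciliation
print(daily_ops, round(daily_gb, 2), round(kb_per_rec))
```

Around 135KB per reconciliation, encryption overhead included, is the implied unit cost to use when sizing disks or cloud volumes for larger deployments.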
Core issue summary (issue / evidence / impact / suggestion)
Based on the test results, the core storage issues and corresponding mitigations are:
1. Rapid LevelDB storage expansion. Evidence: private state grew from 1.2GB to 4.8GB within 20 days, and the encryption layer multiplies raw data size several times over. Impact: higher storage costs and heavier node-operations load. Suggestion: schedule periodic LevelDB compaction and manually clean witness caches and stale historical-proof files.
2. Incomplete official backup script. Evidence: the script backs up only encrypted state files and omits the key index needed for decryption, so restores fail with 'unable to verify state integrity'. Impact: a disk failure could lose core reconciliation data and jeopardize compliance retention. Suggestion: use a customized script that backs up state files and key index together, and keep keys in offline cold storage.
3. Low state-synchronization efficiency. Evidence: syncing the states of 80 contracts took 42 minutes, with no resume support, so network interruptions force full re-syncs. Impact: slower onboarding of new nodes and reduced operational efficiency. Suggestion: order contract syncs to avoid concurrent lock contention, and split SST files for batched synchronization.
4. No hot/cold data separation. Evidence: historical data is mixed with active data, triggering write stalls during LevelDB compaction and raising peak-period write latency. Impact: degraded business smoothness. Suggestion: migrate historical data to external storage manually and track the official hot/cold separation roadmap.
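As an illustration of the periodic-compaction suggestion, an off-peak maintenance schedule might look like the crontab fragment below. The `midnight-node compact` command and all paths are hypothetical placeholders, since the node's real maintenance interface is not documented here; substitute your node's actual tooling and cache locations.

```shell
# Crontab sketch: run storage maintenance during off-peak hours.
# NOTE: "midnight-node compact" and all paths are hypothetical placeholders.

# Daily at 03:30: compact the private-state LevelDB to reclaim superseded data.
30 3 * * * /usr/local/bin/midnight-node compact --db /var/midnight/private-state

# Weekly on Sunday at 04:00: purge witness-cache files older than 30 days.
0 4 * * 0 find /var/midnight/cache -name '*.witness' -mtime +30 -delete
```

Scheduling compaction off-peak avoids stacking compaction-induced write stalls on top of peak reconciliation traffic.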
Objectively speaking, Midnight's local storage architecture currently suits individual users and lightweight applications best. Adapting it to enterprise-grade use will require continued storage optimization. The Kachina protocol's privacy design is a core advantage, but without features such as automatic archiving, incremental backup, and hot/cold data separation, it may struggle to meet enterprise stability and compliance requirements; these can be added through iterative releases.
Based on this test, currently feasible operational optimizations include: configuring daily automatic compaction on nodes to balance storage use and performance; deploying dual-machine hot standby for critical business to reduce data-loss risk; and tracking the state-archiving tools and hot/cold separation features planned in the official Q2 roadmap to further improve the storage architecture.
Feasible solution priority (from high to low)
Immediate risk mitigation: deploy a customized backup script that backs up state files and key indices together, using a 'local + offline cold storage' dual-backup strategy to guarantee recoverability;
Cost optimization: configure LevelDB to compact automatically during daily off-peak hours and regularly clean invalid cache files to control storage growth;
Experience improvement: optimize the contract synchronization strategy to avoid lock contention from concurrent multi-contract syncs and improve sync efficiency;
Long-term planning: follow official releases and assess the feasibility of integrating distributed storage, data sharding, and related features to meet enterprise-grade needs.
In summary, this test comprehensively reviewed the LevelDB-based local storage behavior of a Midnight ecosystem DApp, identifying four core issues: storage expansion, incomplete backups, insufficient synchronization efficiency, and the absence of hot/cold data separation. It also confirmed both the technical merit of the Kachina protocol's privacy design and the room for improvement in its engineering implementation. The operational suggestions above all derive from test data and can be applied directly, helping developers and operators avoid storage risks and control costs. Going forward, the storage architecture can be optimized in step with official releases, so that Midnight's storage solution adapts to deployments of different scales and provides stable storage support for the ecosystem's long-term development.

