- Why is `m_assumeutxo_data` hardcoded in the first place if we don't need to trust others' UTXO sets? (We're being forced to use only that UTXO set version.)
The concern is people putting up websites with instructions for "even faster sync time!" with UTXO set downloads. If such a website were to become popular, and then compromised, there's a non-negligible chance of this actually resulting in a malicious UTXO set being loaded and accepted by users, even if only temporarily (anything is possible in such a UTXO set, including the attacker giving themselves 1 million BTC).
By putting the commitment hash in the source code, it becomes subject to Bitcoin Core's review ecosystem (a sketch of what this entry amounts to follows the list below). I think it is unfair to call this just "developers decide", because:
- Active review community. Anyone can, and many people do, look over the changes to the source code. A change to the `m_assumeutxo_data` value is easy to review (just check an existing node's hash), and it gets a lot of scrutiny.
- Bitcoin Core has reproducible builds. Anyone, including non-developers, can participate in building releases, and they should end up with bit-for-bit identical binaries to the ones published. This establishes confidence that the binaries people actually run match the released source code, including the `m_assumeutxo_data` value.
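For concreteness, here is a minimal sketch of what this hardcoded commitment amounts to: a block height paired with the expected hash of the serialized UTXO set at that height. The types and the hash value below are simplified stand-ins, not Bitcoin Core's actual code; the real entry lives in the chain parameters source.

```cpp
// Minimal sketch (not Bitcoin Core's actual types): the hardcoded snapshot
// commitment pairs a block height with the expected hash of the serialized
// UTXO set at that height.
#include <string>
#include <vector>

struct AssumeutxoEntry {
    int height;                  // snapshot height, e.g. 840'000
    std::string hash_serialized; // expected hash of the serialized UTXO set
};

// A snapshot whose contents hash to anything else is rejected at load time.
const std::vector<AssumeutxoEntry> assumeutxo_data{
    {840'000, "<placeholder: the reviewed hash committed in the source>"},
};
```

Reviewing a change to this entry then reduces to recomputing the UTXO set hash on a node you already trust and comparing it against the proposed value.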
If you think of "developers" as the entire group of people participating in these processes, then it is of course not wrong to say that it is effectively this group making that decision. But I think the scale and transparency of the whole thing matters. This isn't a single person picking a value before a release, without oversight, the way an instruction on a website would be. And of course, the user is inherently trusting this group of people and this process anyway for the validation software itself, even if we try to minimize the extent to which this trust is required.
- Why is `m_assumeutxo_data` set to 840,000 and not to the same block as `assumevalid`?
The original idea behind assumeutxo, though nobody is working on completing it right now, included automatic snapshotting and distribution of snapshots over the network, so that users wouldn't need to go find a source.
In such a model, there would be a predefined schedule of heights at which snapshots would be made. For example, there could be one every 52,500 blocks (roughly once per year), and all nodes supporting the feature would make a snapshot at that height when reached, and keep the last few snapshots around for download over the P2P network. New nodes starting up, with `m_assumeutxo_data` values set to whatever the last multiple of 52,500 was at the time of release, can then synchronize from any snapshot-providing node on the network, even if the provider is running older software than the receiver.
While there is currently no progress on the P2P side of this, it still suggests using a snapshot height schedule that is not tied to Bitcoin Core releases.
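As a toy illustration of that schedule (the 52,500-block interval is the hypothetical example from above, not a deployed parameter), the snapshot height a release would ship is simply the last multiple of the interval at or below the chain tip at release time:

```cpp
// Toy illustration of the proposed fixed snapshot schedule: snapshot heights
// are multiples of 52,500 blocks (roughly once per year), independent of
// Bitcoin Core's release schedule.
#include <iostream>

constexpr int SNAPSHOT_INTERVAL = 52'500;

// Latest scheduled snapshot height at or below the current tip.
constexpr int LatestSnapshotHeight(int tip_height)
{
    return (tip_height / SNAPSHOT_INTERVAL) * SNAPSHOT_INTERVAL;
}

int main()
{
    // A release cut while the tip is anywhere between heights 840,000 and
    // 892,499 would ship the snapshot height 840,000 (16 * 52,500).
    std::cout << LatestSnapshotHeight(861'234) << '\n'; // prints 840000
}
```

Under such a scheme, any node that made the height-840,000 snapshot could serve it to newer nodes, regardless of which release either side is running.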
- I understand that we don't want people to start trusting random UTXO sets out of laziness about waiting to sync, but couldn't we use some kind of signed-by-self UTXO set? It would be nice if, as a user, you could back up the current UTXO set, sign it in some way, and be able to load and verify it in the future to sync a new node.
If it's just for yourself, you can make a backup of the `chainstate` directory (while the node isn't running). Assumeutxo has a number of features that matter in the wide-distribution model but don't apply to personal backups:
- The snapshot data is canonical. Anyone can create a snapshot at a particular height, and everyone will obtain an identical snapshot file, making it easy to compare and to distribute (potentially from multiple sources, bittorrent-style).
- Snapshot loading still involves background revalidation. It gives you a node that is immediately synced to the snapshot point and can continue validation from that point on, but for security, the node will still separately perform a background revalidation of the snapshot itself (from genesis to the snapshot point).
If you trust the snapshot creator and loader completely (because you are both of them yourself), the overhead of these features is unnecessary. By making a backup of your chainstate (which holds the UTXO set), you can at any point, on any system, jump to that point in validation. It is a database, so it is not byte-for-byte comparable between systems, but it is compatible. The side "restoring" the backup won't know it is loading something created externally, so it won't perform background revalidation, but if you ultimately trust the data anyway, that is just duplication of work.
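If you want to try the personal-backup route, here is a minimal sketch. It assumes the node is stopped and a default datadir layout; the paths are illustrative only, and a plain `cp -a` of the directory achieves the same thing.

```cpp
// Minimal sketch of a personal chainstate backup, assuming the node is
// stopped and a default datadir layout; paths are illustrative only.
#include <filesystem>

int main()
{
    namespace fs = std::filesystem;
    const fs::path datadir{"/home/user/.bitcoin"};
    const fs::path backup{"/backups/chainstate-snapshot"};
    // Recursively copy the database files that make up the UTXO set.
    fs::copy(datadir / "chainstate", backup, fs::copy_options::recursive);
    return 0;
}
```

Restoring is the same copy in reverse, again with the node stopped.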