
4 posts tagged with "breaking-change"


Mithril client WASM breaking change

· 2 min read
Mithril Team

Breaking change introduced in the unstable features of the Mithril client WASM

With the release of the new distribution 2437, we have introduced a breaking change in the Mithril client WASM version 0.4.1: unstable features are now activated with a configuration option of the client instead of the special .unstable property of the client.

This means that when an unstable feature becomes stable, there will be no breaking change in developer code using the Mithril client WASM, providing a seamless transition and a better developer experience.

Here is the code used to activate the unstable features with the client options:

let client = new MithrilClient(aggregator_endpoint, genesis_verification_key, {
  // The following option activates the unstable features of the client.
  // Unstable features will trigger an error if this option is not set.
  unstable: true,
});

The previous client.unstable implementation is no longer supported and must be replaced with calls on client directly:

// Before
let mithril_stake_distributions_message =
  await client.unstable.compute_mithril_stake_distribution_message(
    last_stake_distribution,
  );

// After
let mithril_stake_distributions_message =
  await client.compute_mithril_stake_distribution_message(
    last_stake_distribution,
  );

The Mithril client WASM documentation is available here.

For any inquiries or assistance, don't hesitate to reach out to the team on the Discord channel.

Mithril client CLI output breaking change

· 3 min read
Mithril Team

Breaking change introduced in the output of Mithril client CLI

With the release of the new distribution 2408, we have introduced a breaking change to the Mithril client CLI version 0.7.0: all logs are now written to stderr instead of stdout. This allows for cleaner command results that you can directly pipe into tools like jq.

Below is the stdout output of the mithril-client -vvv snapshot download latest --json command before the update:

{"timestamp": "2024-02-27T10:41:55.576645+00:00", "step_num": 1, "total_steps": 5, "message": "Checking local disk info…"}
{"timestamp": "2024-02-27T10:41:55.576932+00:00", "step_num": 2, "total_steps": 5, "message": "Fetching the certificate and verifying the certificate chain…"}
{"timestamp": "2024-02-27T10:41:55.847199+00:00", "step_num": 3, "total_steps": 5, "message": "Downloading and unpacking the snapshot…"}
{"timestamp": "2024-02-27T10:41:56.023585+00:00", "bytes_downloaded": 390, "bytes_total": 345223208, "seconds_left": 75797.765, "seconds_elapsed": 0.085}
{"timestamp": "2024-02-27T10:41:56.356820+00:00", "bytes_downloaded": 9487846, "bytes_total": 345223208, "seconds_left": 32.674, "seconds_elapsed": 0.418}
{"timestamp": "2024-02-27T10:41:56.690001+00:00", "bytes_downloaded": 21682030, "bytes_total": 345223208, "seconds_left": 18.209, "seconds_elapsed": 0.752}
{"timestamp": "2024-02-27T10:41:57.023923+00:00", "bytes_downloaded": 33795639, "bytes_total": 345223208, "seconds_left": 14.218, "seconds_elapsed": 1.085}
{"timestamp": "2024-02-27T10:41:57.356999+00:00", "bytes_downloaded": 45934938, "bytes_total": 345223208, "seconds_left": 12.204, "seconds_elapsed": 1.419}
{"timestamp": "2024-02-27T10:41:57.690031+00:00", "bytes_downloaded": 58130472, "bytes_total": 345223208, "seconds_left": 10.894, "seconds_elapsed": 1.752}
{"timestamp": "2024-02-27T10:41:58.023964+00:00", "bytes_downloaded": 70235494, "bytes_total": 345223208, "seconds_left": 9.922, "seconds_elapsed": 2.086}
{"timestamp": "2024-02-27T10:41:58.357817+00:00", "bytes_downloaded": 82456663, "bytes_total": 345223208, "seconds_left": 9.134, "seconds_elapsed": 2.419}
{"timestamp": "2024-02-27T10:41:58.690945+00:00", "bytes_downloaded": 94618128, "bytes_total": 345223208, "seconds_left": 8.463, "seconds_elapsed": 2.753}
{"timestamp": "2024-02-27T10:41:59.024599+00:00", "bytes_downloaded": 106765259, "bytes_total": 345223208, "seconds_left": 7.868, "seconds_elapsed": 3.086}
{"timestamp": "2024-02-27T10:41:59.358139+00:00", "bytes_downloaded": 118941687, "bytes_total": 345223208, "seconds_left": 7.325, "seconds_elapsed": 3.420}
{"timestamp": "2024-02-27T10:41:59.691176+00:00", "bytes_downloaded": 131052374, "bytes_total": 345223208, "seconds_left": 6.824, "seconds_elapsed": 3.753}
{"timestamp": "2024-02-27T10:42:00.025189+00:00", "bytes_downloaded": 143190076, "bytes_total": 345223208, "seconds_left": 6.351, "seconds_elapsed": 4.087}
{"timestamp": "2024-02-27T10:42:00.358735+00:00", "bytes_downloaded": 155448192, "bytes_total": 345223208, "seconds_left": 5.896, "seconds_elapsed": 4.420}
{"timestamp": "2024-02-27T10:42:00.693529+00:00", "bytes_downloaded": 167494850, "bytes_total": 345223208, "seconds_left": 5.466, "seconds_elapsed": 4.755}
{"timestamp": "2024-02-27T10:42:01.026885+00:00", "bytes_downloaded": 179789000, "bytes_total": 345223208, "seconds_left": 5.043, "seconds_elapsed": 5.088}
{"timestamp": "2024-02-27T10:42:01.360483+00:00", "bytes_downloaded": 187536751, "bytes_total": 345223208, "seconds_left": 4.778, "seconds_elapsed": 5.422}
{"timestamp": "2024-02-27T10:42:01.693978+00:00", "bytes_downloaded": 199737576, "bytes_total": 345223208, "seconds_left": 4.389, "seconds_elapsed": 5.756}
{"timestamp": "2024-02-27T10:42:02.027113+00:00", "bytes_downloaded": 211879712, "bytes_total": 345223208, "seconds_left": 4.006, "seconds_elapsed": 6.089}
{"timestamp": "2024-02-27T10:42:02.360139+00:00", "bytes_downloaded": 224033698, "bytes_total": 345223208, "seconds_left": 3.626, "seconds_elapsed": 6.422}
{"timestamp": "2024-02-27T10:42:02.694407+00:00", "bytes_downloaded": 236212871, "bytes_total": 345223208, "seconds_left": 3.249, "seconds_elapsed": 6.756}
{"timestamp": "2024-02-27T10:42:03.027431+00:00", "bytes_downloaded": 248301091, "bytes_total": 345223208, "seconds_left": 2.878, "seconds_elapsed": 7.089}
{"timestamp": "2024-02-27T10:42:03.361001+00:00", "bytes_downloaded": 260428703, "bytes_total": 345223208, "seconds_left": 2.509, "seconds_elapsed": 7.423}
{"timestamp": "2024-02-27T10:42:03.694430+00:00", "bytes_downloaded": 272673595, "bytes_total": 345223208, "seconds_left": 2.139, "seconds_elapsed": 7.756}
{"timestamp": "2024-02-27T10:42:04.028557+00:00", "bytes_downloaded": 284878102, "bytes_total": 345223208, "seconds_left": 1.774, "seconds_elapsed": 8.090}
{"timestamp": "2024-02-27T10:42:04.361686+00:00", "bytes_downloaded": 296835374, "bytes_total": 345223208, "seconds_left": 1.418, "seconds_elapsed": 8.423}
{"timestamp": "2024-02-27T10:42:04.695532+00:00", "bytes_downloaded": 309005759, "bytes_total": 345223208, "seconds_left": 1.058, "seconds_elapsed": 8.757}
{"timestamp": "2024-02-27T10:42:05.028705+00:00", "bytes_downloaded": 321118015, "bytes_total": 345223208, "seconds_left": 0.702, "seconds_elapsed": 9.090}
{"timestamp": "2024-02-27T10:42:05.361956+00:00", "bytes_downloaded": 333264477, "bytes_total": 345223208, "seconds_left": 0.347, "seconds_elapsed": 9.424}
{"timestamp": "2024-02-27T10:42:05.711065+00:00", "step_num": 4, "total_steps": 5, "message": "Computing the snapshot message"}
{"timestamp": "2024-02-27T10:42:12.540752+00:00", "step_num": 5, "total_steps": 5, "message": "Verifying the snapshot signature…"}
{"timestamp": "2024-02-27T10:42:12.540913+00:00", "db_directory": "/mithril-client-0.5.17/db"}

And here is the stdout output with Mithril client CLI version 0.7.0:

{"timestamp": "2024-02-27T10:43:06.357962+00:00", "db_directory": "/mithril-client-0.7.0/db"}

In addition, the logs produced with the --log-format-json option, which enables JSON log output, are now written to stderr as well.
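Since logs no longer pollute stdout, the JSON result of a command can be piped directly to a tool such as jq. Here is a minimal sketch, assuming jq is installed and the aggregator endpoint is already configured in your environment:

$> mithril-client snapshot download latest --json | jq -r '.db_directory'
/mithril-client-0.7.0/db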

For any inquiries or assistance, don't hesitate to reach out to the team on the Discord channel.

Mithril internal stores switch to SQLite

· 4 min read
Mithril Team

What is that?

Since almost the beginning of the Mithril project, the software has relied on a store mechanism to save its different states, allowing Signers and Aggregators to resume from a correct state when switched on and off. This internal store mechanism used to be a collection of JSON files saved in a given directory. Even though this does the job, it has flaws: the data are hard to query when debugging, especially when crossing data (for example, which signers have participated in this multi-signature?). The data are also stored in different places, which can be a problem when moving these files from one host to another, and we had to devise a migration scenario for every structure change. Switching to a file-based SQL database solves these issues.
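To give an idea of the kind of cross-cutting question that becomes easy to answer with an SQL store, here is a purely illustrative query; the table and column names are hypothetical and do not reflect the actual Mithril schema:

$> sqlite3 signer.sqlite3 \
   "SELECT signer_id FROM single_signatures WHERE multi_signature_id = 'some-id';"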

The new release now uses SQLite stores in place of the JSON file storage. This means that in order to continue running a Signer or an Aggregator node, it is necessary to migrate from the old storage system to SQLite. This release comes with a tool that performs the migration, which should be as straightforward as running a single command (read below). The migration tool will only be available for a limited time, so that Mithril beta testers can migrate their existing data.

How to migrate data from the old storage system to SQLite stores?

There are two ways of getting the new version and the associated migration tool: downloading the binaries from GitHub or compiling them yourself.

Downloading

Download the new mithril-signer and mithril-signer-migrate files from the nightly builds page. Make them executable:

$> chmod +x mithril-signer*
$> ls -1F mithril-signer*
mithril-signer*
mithril-signer-migrate*

Note: the * suffix appended to the entries listed above indicates that the file is executable. If it is not present, ensure the chmod command did not produce any errors.

Compiling

If you compile your node yourself as described in the guide, you have to compile the migration tool as well:

$> cd mithril-signer
$> cargo build --all-targets --release
Compiling mithril-signer v0.1.0 (/home/somebody/shared/mithril/mithril-signer)
Finished release [optimized] target(s) in 4.56s
$> ls -1F ../target/release/mithril-signer*
../target/release/mithril-signer*
../target/release/mithril-signer.d
../target/release/mithril-signer-migrate*
../target/release/mithril-signer-migrate.d

Running the migration

The first step is to stop the running Mithril node, if any. The mithril-signer-migrate executable can perform the migration automatically once you know where your JSON files are located. Have a look at your configuration file (default /opt/mithril/mithril-signer.env), check the value associated with the DATA_STORES_DIRECTORY key (which defaults to /opt/mithril/stores), and pass that path to the --db-dir option of the following command:

$> ./mithril-signer-migrate automatic --db-dir /paste/the/data/stores/directory/here
Mithril Aggregator JSON → SQLite migration tool.
Migrating protocol_initializer_store data…
OK ✓
Migrating stake_store data…
OK ✓

At the end of this command, a file signer.sqlite3 (or aggregator.sqlite3 if you run an Aggregator) should be present in the specified base directory.
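If you want to double-check the result, the sqlite3 command line tool can list the tables of the new database. This is an optional sketch, assuming sqlite3 is installed and the default stores directory is used:

$> sqlite3 /opt/mithril/stores/signer.sqlite3 ".tables"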

That should be enough; you can now launch your upgraded Mithril node.

Note: The migration executable does not remove the old JSON files from the disk.

Manual migration process

The executable also provides a manual switch for migrating Mithril JSON stores located in custom directories. This is mainly intended for developers working in customized environments. Each internal store has its own data structure, so in order to correctly migrate and process the data, the type of the store has to be given on the command line.

$> ./mithril-signer-migrate manual --help

The command above should give you all the information needed to run a custom store migration.

Feel free to reach out to us on the Discord channel for questions and/or help.

Genesis Certificate support added

· 2 min read
Mithril Team

Update: The PR has been merged and the feature is being deployed on the GCP Mithril Aggregator.

This afternoon, we plan to merge the PR that activates the Genesis Certificate feature on the GCP Mithril Aggregator.

PR: Implement Real Genesis Certificate #438

Issue: Bootstrap Certificate Chain w/ Genesis Certificate #364

This will involve some manual operations that will temporarily prevent the service from running:

  • We will have to reset the stores of the Snapshots and Certificates. This means that the Mithril Explorer will display a No snapshot available message.

  • The Mithril Signers will have to wait until the next epoch #30 to be able to sign. This means that we should see the first available Snapshot 1 hour after the epoch transition.

The SPOs currently running a Mithril Signer will have to recompile their node in order to take advantage of the latest improvements (such as the registration of the nodes, which will take a few minutes instead of a few hours). However, the previously compiled node will still be able to contribute to signatures.

In order to restore a Mithril Snapshot, a Mithril Client will now need access to the Genesis Verification Key, which is provided through an environment variable set when running the client: GENESIS_VERIFICATION_KEY=$(wget -q -O - https://raw.githubusercontent.com/input-output-hk/mithril/main/TEST_ONLY_genesis.vkey).
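In practice, this means exporting the variable before invoking the client. A minimal sketch, assuming wget is available; the client invocation itself is left out since the exact subcommand depends on your client version:

$> export GENESIS_VERIFICATION_KEY=$(wget -q -O - https://raw.githubusercontent.com/input-output-hk/mithril/main/TEST_ONLY_genesis.vkey)
$> # ...then run the Mithril Client restore command as usual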

Feel free to reach out to us on the Discord channel for questions and/or help.