Introduction
Imagine a Bitcoin wallet controlled by a company with five board members. You want a rule: any three of the five can authorize a transaction, but no one or two members can steal the funds. And when the transaction hits the blockchain, it should look exactly like any other — nobody can tell it was signed by a group.
This is exactly what FROST does. FROST stands for Flexible Round-Optimized Schnorr Threshold Signatures. It lets $t$ out of $n$ participants jointly produce a Schnorr signature — without any single party ever knowing the full secret key. The output is a standard Schnorr signature, indistinguishable from one created by a single signer.
This is directly relevant to Bitcoin. Since the Taproot upgrade (2021), Bitcoin natively supports Schnorr signatures (BIP-340) for transaction authorization. FROST produces standard Schnorr signatures on the secp256k1 curve — exactly what Bitcoin expects. This makes FROST a natural fit for threshold signing of Bitcoin transactions: custody solutions, cross-chain bridges, or any setup where multiple parties must approve a spend, without revealing on-chain that a threshold scheme was used at all.
Why not just use a regular private key? Because a single key is a single point of failure. Stolen key — all funds gone. Lost key — all funds locked forever. FROST distributes trust: no single participant can sign alone, and any $t$ of the $n$ participants can cooperate to sign. The rest can be offline, unreachable, or even compromised — as long as fewer than $t$ are malicious, the key is safe.
To understand how FROST achieves this, we'll build up from its foundations, one step at a time:
- Schnorr signatures — the signature scheme FROST is built on
- Nonce aggregation — how multiple signers combine their randomness into one
- Partial signature aggregation — how individual contributions sum to a valid signature
- Shamir secret sharing — how to split a secret so that any $t$ of $n$ pieces can reconstruct it
- Lagrange interpolation — the mathematical tool that makes reconstruction work
- The FROST protocol — putting it all together
Throughout the article, we'll use concrete numeric examples you can verify by hand. In real cryptography, all numbers are 256-bit integers and the math happens on elliptic curves, but the logic is identical.
Schnorr Signatures
Before we can distribute a signature across multiple parties, we need to understand how a single-signer Schnorr signature works.
The setup: elliptic curve math in 30 seconds
All the math happens in a cyclic group — think of it as a set of points on an elliptic curve. The group has a generator point $G$ and a prime order $q$ (the total number of points). We have one operation: scalar multiplication — multiplying $G$ by an integer $k$ to get a new point $P = k \cdot G$.
The critical property: given $k$ and $G$, computing $P = k \cdot G$ is trivial (milliseconds on a laptop). But given $P$ and $G$, recovering $k$ is computationally infeasible — this is the discrete logarithm problem, and it's the foundation of elliptic curve cryptography.
Think of it like mixing paint: combining yellow and blue to get green is easy, but looking at the green and figuring out the exact ratio of yellow and blue is practically impossible.
With that in mind, a signer has:
- Secret key: a random number $x$ (a "scalar" — just a big integer modulo $q$)
- Public key: $P = x \cdot G$ (a point on the curve — easy to compute from $x$, impossible to reverse)
How signing works
To sign a message $m$, the signer:
- Picks a random nonce $k$ and computes the commitment $R = k \cdot G$. The nonce is a one-time secret random number. "Commitment" because $R$ publicly commits the signer to their choice of $k$ without revealing $k$ itself (discrete log again).
- Computes the challenge $c = H(R \parallel P \parallel m)$. This hashes together the commitment, the public key, and the message. The hash acts as an unpredictable "question" that the signer must answer — and crucially, it depends on $R$, so the signer can't know $c$ before committing to their nonce. We'll see why this matters right after verification.
- Computes the response $s = k + c \cdot x$. This is the core equation — it blends the nonce with the secret key, weighted by the challenge. It's the "answer" to the challenge that proves the signer knows $x$.
- Outputs the signature $(R, s)$ — the commitment point and the response scalar.
How verification works
Anyone with the public key $P$ can verify the signature $(R, s)$ by checking:

$$s \cdot G \stackrel{?}{=} R + c \cdot P$$

This reads: "multiply the generator by the response $s$, and check if it equals the commitment $R$ plus the challenge $c$ times the public key $P$."
Why does this equation hold? Let's expand the right side by substituting what $R$ and $P$ actually are:

$$R + c \cdot P = k \cdot G + c \cdot (x \cdot G) = (k + c \cdot x) \cdot G = s \cdot G$$

The equation works because we defined $s = k + c \cdot x$. A verifier who doesn't know $x$ or $k$ can still check this — they only need the public values $R$, $s$, $P$, and $m$.
Why R must be in the challenge hash
Now that we've seen how verification works, we can understand why the challenge must include $R$. Without it, the scheme is completely broken — anyone can forge signatures without knowing the secret key.
Suppose the challenge were just $c = H(P \parallel m)$ — no $R$. An attacker wants to forge a signature on message $m$ under public key $P$:
- Compute $c = H(P \parallel m)$ — this is public knowledge, no secrets needed.
- Pick any random $s$.
- Compute $R = s \cdot G - c \cdot P$.
- Output $(R, s)$ as the "signature."
Does it verify? $s \cdot G \stackrel{?}{=} R + c \cdot P = (s \cdot G - c \cdot P) + c \cdot P = s \cdot G$. Yes — a complete forgery without knowing the secret key.
The attack works because the forger gets to choose $R$ and $s$ simultaneously, engineering them to satisfy the verification equation. By including $R$ in the hash, we force the signer to commit to $R$ before the challenge is determined. A forger would need to find $(R, s)$ such that $c = H(R \parallel P \parallel m)$ and $s \cdot G = R + c \cdot P$ hold simultaneously — but they can't know $c$ until they fix $R$, and they can't construct a valid $s$ without knowing $x$. This circular dependency is what makes forgery infeasible.
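To see the attack concretely, here is a minimal Python sketch in a toy group (integers modulo 97, where a "point" $k \cdot G$ is represented by the scalar $k$, and a fixed number stands in for the hash output). All values are made up for illustration; this model has no real security.

```python
# Forgery against Schnorr when the challenge omits R.
# Toy group: a "point" k*G is just the integer k mod Q -- illustrative only.
Q = 97
P = 42                      # victim's public key; attacker does NOT know x
c = 13                      # c = H(P || m): computable without any secret
s = 50                      # attacker picks any response
R = (s - c * P) % Q         # engineer the commitment: R = s*G - c*P

# Standard verification: s*G == R + c*P ?
assert s % Q == (R + c * P) % Q
print("forged (R, s) =", (R, s), "accepted without the secret key")
```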
This construction is called the Fiat-Shamir transform (Fiat & Shamir, CRYPTO 1986): it converts an interactive identification protocol into a non-interactive signature scheme by replacing the verifier's random challenge with a hash of the commitment. Pointcheval & Stern (EUROCRYPT 1996) proved that this construction is existentially unforgeable in the random oracle model.
Why P must be in the challenge hash
Including $R$ prevents forgery, but why is the public key $P$ also in the hash? Without it, signatures aren't bound to a specific key — an attacker can take someone else's valid signature and claim it was signed by a different key.
Suppose the challenge were $c = H(R \parallel m)$ — no $P$. Given a valid signature $(R, s)$ for message $m$ under public key $P$, an attacker can find a different public key $P'$ under which the same pair $(R, s)$ is also a valid signature (on a message $m'$ of the attacker's choice):
- Start with the verification equation the new key must satisfy: $s \cdot G = R + c' \cdot P'$, where $c' = H(R \parallel m')$.
- Rearrange: $P' = (c')^{-1} \cdot (s \cdot G - R)$.
- The pair $(R, s)$ now verifies under $P'$ too, because $R + c' \cdot P' = R + (s \cdot G - R) = s \cdot G$.
The attacker doesn't need to know the secret key for $P'$ — they just need to find a point that makes the equation balance. This is called a key substitution attack (or "duplicate signature key selection").
Why does this matter in practice? Consider a multi-party protocol where you need to prove that a specific participant signed a message. Without $P$ in the hash, an adversary could take a legitimate signature from Alice and reuse it to frame Bob — or vice versa. In threshold signing like FROST, this is especially dangerous: the protocol relies on verifying that each participant produced their own valid signature share under their own verification key $Y_i$.
Including $P$ in the hash ties the challenge (and therefore the entire signature) to a specific public key. The same pair $(R, s)$ produces a different challenge for a different key, so it won't verify under any other key.
This is analyzed in the multi-user security setting by Menezes & Smart (2004). Ed25519 and BIP-340 (Bitcoin's Schnorr scheme) both include the public key in the hash for this reason.
Why the nonce is the most critical detail
Look at the response equation again: $s = k + c \cdot x$. Both $s$ (published in the signature) and $c$ (computed from public data) are known to everyone. The only thing stopping an attacker from solving for the secret key $x$ is the nonce $k$:

$$x = (s - k) \cdot c^{-1}$$

If the nonce leaks, the secret key is immediately exposed. If the same nonce $k$ is reused for two different messages (producing two different challenges $c_1, c_2$ and responses $s_1, s_2$), an attacker can set up two equations and solve for $x$:

$$s_1 = k + c_1 \cdot x, \qquad s_2 = k + c_2 \cdot x$$

Subtract: $s_1 - s_2 = (c_1 - c_2) \cdot x$, so $x = (s_1 - s_2) \cdot (c_1 - c_2)^{-1}$. Game over. This is why nonces must be truly random, used exactly once, and immediately deleted. This matters even more in FROST, where multiple parties each contribute nonces.
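The recovery really is one line of algebra. Here is a sketch in the same toy modular setting (all numbers are illustrative stand-ins):

```python
# Recovering the secret key from two signatures that reuse the same nonce k.
# Toy group: integers mod Q; pow(a, -1, Q) computes a modular inverse.
Q = 97
x, k = 7, 10                       # secret key and the (reused) nonce
c1, c2 = 15, 40                    # two different challenges
s1 = (k + c1 * x) % Q              # response for message 1
s2 = (k + c2 * x) % Q              # response for message 2

# The attacker sees (c1, s1) and (c2, s2) and solves the two equations:
x_recovered = (s1 - s2) * pow((c1 - c2) % Q, -1, Q) % Q
assert x_recovered == x
print("recovered secret key:", x_recovered)
```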
A concrete example with small numbers
In real systems these are 256-bit numbers, but the math is identical with small ones. Let's work modulo $q = 97$.
- Secret key: $x = 7$
- Public key: $P = 7 \cdot G$
- Nonce: $k = 10$, so commitment: $R = 10 \cdot G$
- Suppose the hash gives us: $c = 15$
- Response: $s = k + c \cdot x = 10 + 15 \cdot 7 = 115 \equiv 18 \pmod{97}$
Verification: Is $s \cdot G = R + c \cdot P$?
- Left side: $s \cdot G = 18 \cdot G$
- Right side: $R + c \cdot P = 10 \cdot G + 15 \cdot (7 \cdot G) = 115 \cdot G = 18 \cdot G$ (since $115 \equiv 18 \pmod{97}$)
Both sides equal $18 \cdot G$. The signature is valid.
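You can check this arithmetic mechanically with a minimal Python sketch. It models the toy group additively as integers modulo 97 (a point $k \cdot G$ is represented by the scalar $k$), which is fine for verifying the algebra but offers no cryptographic security; the fixed challenge stands in for a hash.

```python
# Toy Schnorr over the additive group of integers mod Q.
# A "point" k*G is represented by k mod Q -- algebra check only, not secure.
Q = 97          # toy group order
x = 7           # secret key
P = x % Q       # public key "x*G"
k = 10          # nonce
R = k % Q       # commitment "k*G"
c = 15          # challenge (stand-in for a hash output)

s = (k + c * x) % Q            # response
assert s == 18

lhs = s % Q                    # "s*G"
rhs = (R + c * P) % Q          # "R + c*P"
assert lhs == rhs == 18
print("signature (R, s) =", (R, s), "verifies:", lhs == rhs)
```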
Aggregating nonce R
Now let's move toward multi-party signing. Suppose two signers want to jointly produce a signature. Each picks their own random nonce: signer 1 picks $k_1$ and publishes $R_1 = k_1 \cdot G$, signer 2 picks $k_2$ and publishes $R_2 = k_2 \cdot G$.
The simplest idea: just add them together.
The group's nonce is $k = k_1 + k_2$ and the group's commitment is $R = R_1 + R_2 = (k_1 + k_2) \cdot G$. This is called additive nonce aggregation, and it almost works — except for a devastating attack.
Why naive aggregation is dangerous
Suppose signer 2 is malicious. The honest signer 1 publishes $R_1$ first. The attacker waits, sees $R_1$, and then computes:

$$R_2 = R^{*} - R_1$$

for some $R^{*}$ they chose in advance. Now the aggregate is $R_1 + R_2 = R^{*}$ — the attacker has complete control over the group commitment.
Why is this bad? The challenge $c = H(R \parallel P \parallel m)$ depends on $R$. If the attacker controls $R$, they can precompute the challenge and craft their contribution to forge a signature. This is called a nonce manipulation attack.
The root cause: the attacker gets to see the honest party's commitment before choosing their own. They can "cancel out" the honest contribution and replace it with whatever they want.
The fix: binding factors
FROST solves this by adding a twist. Instead of each signer committing to one nonce value, they commit to two values, and these get combined using a binding factor — a hash-derived number that depends on everyone's commitments.
Each participant $i$ publishes two commitments:
- $D_i = d_i \cdot G$ (the "hiding" commitment)
- $E_i = e_i \cdot G$ (the "binding" commitment)
After everyone has published, the binding factor for each participant is computed as:

$$\rho_i = H(i \parallel m \parallel B)$$

where $B$ is the list of every participant's commitment pair $(D_j, E_j)$.
The group commitment is then:

$$R = \sum_i (D_i + \rho_i \cdot E_i)$$

The key insight: $\rho_i$ is a hash of all commitments, so it can only be computed after everyone has already published $(D_i, E_i)$. If any signer changes their commitment, every binding factor changes, invalidating any precomputed attack. The attacker can no longer manipulate $R$ because they'd need to predict the hash output before it's computable.
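A sketch of this aggregation rule in the toy group follows. The `binding_factor` function is a stand-in built from Python's `hashlib`, not the domain-separated hash RFC 9591 specifies, and the commitment values are made up:

```python
import hashlib

Q = 97  # toy group order; a "point" k*G is represented by k mod Q

def binding_factor(i, msg, commitments):
    """Stand-in for rho_i = H(i || m || all commitments), reduced into the toy group."""
    data = str(i).encode() + msg + str(sorted(commitments.items())).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

# Each participant i published a hiding/binding commitment pair (D_i, E_i).
commitments = {1: (10, 25), 2: (33, 4)}     # toy values
msg = b"transfer 1 BTC"

rho = {i: binding_factor(i, msg, commitments) for i in commitments}
# Group commitment: R = sum_i (D_i + rho_i * E_i)
R = sum(D + rho[i] * E for i, (D, E) in commitments.items()) % Q
print("binding factors:", rho, "group commitment R =", R)
```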
Why two commitments instead of one?
If each signer only published one commitment $D_i$ and we mixed it with a binding factor as $R = \sum_i \rho_i \cdot D_i$, the attacker could still manipulate things — they have one unknown ($D_i$) but also only one equation to satisfy. Two independent commitments combined as $D_i + \rho_i \cdot E_i$ give the binding factor something to "entangle" with. The attacker would need to predict $\rho_i$ before choosing $(D_i, E_i)$, but $\rho_i$ depends on $(D_i, E_i)$ — a chicken-and-egg problem that defeats the attack.
Why does FROST need per-participant binding factors?
In MuSig2 (an $n$-of-$n$ multisignature scheme where all signers must participate), a single binding factor — the same for everyone — is sufficient. That's because the signing set is fixed: the adversary can't choose which honest parties to include.
FROST is a threshold scheme ($t$-of-$n$): only $t$ signers are needed. An adversary controlling $t - 1$ parties can start multiple parallel signing sessions, each time pairing their corrupt parties with a different honest signer. If the binding factor were the same for all participants, the adversary could correlate honest signers' nonces across sessions and forge signatures (this is called Wagner's generalized birthday attack).
FROST defeats this by including the participant's identifier $i$ in the binding factor hash. Different participants get different $\rho_i$ values, so the adversary can't correlate nonce contributions across sessions.
Aggregating partial signatures
Now we know how to safely combine nonces. The next question: how do we combine each signer's contribution into a single valid Schnorr signature?
Each signer produces a signature share (also called a partial signature) — a number computed from their own secrets and nonces. The design ensures that when you add up all the shares, you get a valid Schnorr response.
The signature share equation
Recall that a standard Schnorr response is $s = k + c \cdot x$. In FROST, each participant $i$ computes their share as:

$$z_i = d_i + \rho_i \cdot e_i + \lambda_i \cdot x_i \cdot c$$
Let's break down each term:
- $d_i + \rho_i \cdot e_i$ — this is participant $i$'s contribution to the aggregate nonce. It matches the commitments $(D_i, E_i)$ that went into computing $R$.
- $\lambda_i$ — the Lagrange coefficient, a weight that depends on which set of signers is participating. We'll explain this in detail in the next sections. For now, just know that these weights ensure the secret shares add up correctly.
- $x_i$ — participant $i$'s secret share (their piece of the group secret key).
- $c$ — the challenge $c = H(R \parallel P \parallel m)$, same as in regular Schnorr.
Why the sum of shares equals a valid signature
When we add up all signature shares:

$$z = \sum_{i \in S} z_i = \sum_{i \in S} (d_i + \rho_i \cdot e_i) + c \cdot \sum_{i \in S} \lambda_i \cdot x_i$$
Two things happen:
- The nonce parts sum to $k$ — the discrete log of the group commitment $R$. This is by construction: $R$ was defined as the sum of $D_i + \rho_i \cdot E_i$, so the corresponding scalars $d_i + \rho_i \cdot e_i$ sum to $k$.
- The weighted secret shares sum to $x$ — the group secret key. This is where Lagrange interpolation (covered below) does its magic: the weights $\lambda_i$ are chosen so that $\sum_{i \in S} \lambda_i \cdot x_i = x$.
Together: $z = k + c \cdot x$, which is exactly a valid Schnorr response. The verifier checks $z \cdot G = R + c \cdot P$ and it passes — nobody needs to know that $z$ was computed by multiple parties.
Checking individual shares (identifiable abort)
Before adding the shares up, the Coordinator can verify each one individually:

$$z_i \cdot G \stackrel{?}{=} D_i + \rho_i \cdot E_i + c \cdot \lambda_i \cdot Y_i$$

where $Y_i = x_i \cdot G$ is participant $i$'s public verification share — a publicly known point that corresponds to their secret share. This check uses only public values (no secrets revealed). If a participant sends a bad share, the Coordinator can identify them and abort. This is why FROST has identifiable abort — you can always tell who cheated.
A concrete example
Let's trace through partial signature aggregation with numbers. We'll use the Shamir shares from the next section (feel free to jump ahead and come back):
- $t = 2$, $n = 3$, signing set $S = \{1, 2\}$
- Group secret: $x = 16$, challenge: $c = 5$
- Secret shares: $x_1 = 22$, $x_2 = 28$
- Lagrange coefficients for $S = \{1, 2\}$: $\lambda_1 = 2$, $\lambda_2 = -1$
- Nonces: $d_1 = 3$, $e_1 = 4$, $d_2 = 5$, $e_2 = 6$
- Binding factors: $\rho_1 = 2$, $\rho_2 = 3$
Each participant computes their share by plugging into $z_i = d_i + \rho_i \cdot e_i + \lambda_i \cdot x_i \cdot c$ (working modulo 97):

$$z_1 = 3 + 2 \cdot 4 + 2 \cdot 22 \cdot 5 = 231 \equiv 37 \pmod{97}$$
$$z_2 = 5 + 3 \cdot 6 + (-1) \cdot 28 \cdot 5 = -117 \equiv 77 \pmod{97}$$

The Coordinator sums them:

$$z = z_1 + z_2 = 37 + 77 = 114 \equiv 17 \pmod{97}$$

Let's verify this equals $k + c \cdot x$. The aggregate nonce is $k = (d_1 + \rho_1 \cdot e_1) + (d_2 + \rho_2 \cdot e_2) = 11 + 23 = 34$. The reconstructed secret is $\lambda_1 \cdot x_1 + \lambda_2 \cdot x_2 = 2 \cdot 22 - 28 = 16$. So:

$$k + c \cdot x = 34 + 5 \cdot 16 = 114 \equiv 17 \pmod{97}$$

It works. The sum of partial signatures is a valid Schnorr response.
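The sketch below re-runs this aggregation check in the toy group (integers modulo 97), using the same shares, coefficients, nonces, and binding factors assumed above:

```python
# Verifying the partial-signature aggregation example (all arithmetic mod Q).
Q = 97
c = 5                                   # challenge (assumed hash output)
shares  = {1: 22, 2: 28}                # secret shares x_1, x_2
lam     = {1: 2, 2: Q - 1}              # Lagrange coefficients (Q-1 == -1 mod Q)
nonces  = {1: (3, 4), 2: (5, 6)}        # (d_i, e_i)
rho     = {1: 2, 2: 3}                  # binding factors (assumed hash outputs)

# Each participant's signature share: z_i = d_i + rho_i*e_i + lambda_i*x_i*c
z = {i: (d + rho[i] * e + lam[i] * shares[i] * c) % Q
     for i, (d, e) in nonces.items()}
assert z == {1: 37, 2: 77}

z_total = sum(z.values()) % Q           # Coordinator aggregates
k = sum(d + rho[i] * e for i, (d, e) in nonces.items()) % Q   # aggregate nonce
x = sum(lam[i] * shares[i] for i in shares) % Q               # reconstructed secret
assert z_total == (k + c * x) % Q == 17
print("aggregate response z =", z_total)
```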
Shamir Secret Sharing
We've been using "secret shares" and "Lagrange coefficients" without fully explaining where they come from. Let's fix that.
The core problem: we need to split the secret key $x$ among $n$ participants so that:
- Any $t$ participants can reconstruct $x$ (by combining their shares)
- Any $t - 1$ (or fewer) participants learn absolutely nothing about $x$
This is Shamir's Secret Sharing (1979), and it's built on a beautifully simple idea from algebra.
The key insight: polynomials and points
A straight line ($y = ax + b$) is uniquely determined by 2 points. A parabola ($y = ax^2 + bx + c$) is uniquely determined by 3 points. In general, a polynomial of degree $d$ is uniquely determined by $d + 1$ points.
Shamir's trick: to create a $t$-of-$n$ sharing scheme, embed the secret as the constant term of a random polynomial of degree $t - 1$. Then distribute $n$ points on that polynomial. Any $t$ points determine the polynomial (and thus the secret), but $t - 1$ points leave the secret completely undetermined.
How it works
The dealer (the entity splitting the secret) picks a random polynomial of degree $t - 1$:

$$f(X) = x + a_1 X + a_2 X^2 + \dots + a_{t-1} X^{t-1}$$

The constant term is the secret: $f(0) = x$. The other coefficients $a_1, \dots, a_{t-1}$ are random. Each participant $i$ receives the share $x_i = f(i)$ — the polynomial evaluated at their index.
Concrete example
Let's set up a 2-of-3 scheme — any 2 of 3 participants can reconstruct, 1 participant alone learns nothing. Our secret is $x = 16$.
Since $t = 2$, we need a degree-1 polynomial (a line): $f(X) = 16 + 6X$ (the coefficient $6$ is chosen randomly).
Compute the shares by evaluating $f$ at each participant's index:
- Participant 1 gets $x_1 = f(1) = 16 + 6 = 22$
- Participant 2 gets $x_2 = f(2) = 16 + 12 = 28$
- Participant 3 gets $x_3 = f(3) = 16 + 18 = 34$
Why is this secure? The shares are points on a line. Two points determine a line, so any 2 shares let you find $f$ and compute $f(0) = 16$. But a single point, say $(2, 28)$, tells you nothing: for any possible secret $x'$, there's a line through $(2, 28)$ with $y$-intercept $x'$. One share is consistent with every possible secret.
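Here is a minimal sketch of the split and a two-share reconstruction in Python, working modulo 97 as in the rest of the examples (the "random" coefficient is fixed to 6 so the output matches the shares above):

```python
# Shamir 2-of-3 split over the integers mod Q: f(X) = secret + a1*X, shares are f(i).
Q = 97
secret = 16
a1 = 6                                   # "random" coefficient, fixed for the example

def f(X):
    return (secret + a1 * X) % Q

shares = {i: f(i) for i in (1, 2, 3)}
assert shares == {1: 22, 2: 28, 3: 34}

# Any two shares reconstruct the secret; e.g. participants 1 and 2:
# the line through (1, x1) and (2, x2) has slope x2 - x1, so its
# y-intercept is x1 - (x2 - x1) = 2*x1 - x2.
x1, x2 = shares[1], shares[2]
recovered = (2 * x1 - x2) % Q
assert recovered == secret
print("shares:", shares, "recovered secret:", recovered)
```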
The dealer problem: who picks the polynomial?
In the basic scheme, a single dealer knows the secret, creates the polynomial, and distributes shares. But in a decentralized system (like a blockchain bridge), there shouldn't be anyone who ever knows the full secret. If the dealer is compromised during setup, the whole scheme fails.
The solution is Distributed Key Generation (DKG) — the participants collectively generate a shared secret where nobody, not even during setup, ever knows the full key.
Pedersen DKG: generating a shared secret with no trusted dealer
The idea is elegant: instead of one dealer picking one polynomial, every participant picks their own random polynomial and shares pieces of it with everyone else. The group secret emerges as the sum of everyone's individual secrets, but no one ever computes that sum.
Here's how it works. Each of the $n$ participants $P_i$ independently:
- Picks a random polynomial $f_i(X) = a_{i,0} + a_{i,1} X + \dots + a_{i,t-1} X^{t-1}$ of degree $t - 1$. The constant term $a_{i,0}$ is $P_i$'s private contribution to the group secret.
- Publishes commitments to their polynomial's coefficients: $A_{i,k} = a_{i,k} \cdot G$ for each coefficient $a_{i,k}$. These are elliptic curve points — they "commit" to their polynomial without revealing the actual coefficients (the discrete log problem protects them).
- Privately sends $f_i(j)$ to each participant $P_j$. This is the value of $P_i$'s polynomial evaluated at $P_j$'s index.
- Publishes a proof that they know their secret $a_{i,0}$ (a Schnorr proof of knowledge). This prevents a "rogue-key attack" where a malicious participant claims a carefully crafted public key to manipulate the group key.
After everyone has done this, each participant $P_j$ adds up all the values they received:

$$x_j = f_1(j) + f_2(j) + \dots + f_n(j) = \sum_{i=1}^{n} f_i(j)$$

This is $P_j$'s combined secret share. It's a sum of contributions from every participant.
The group secret (which nobody ever computes explicitly!) is:

$$x = a_{1,0} + a_{2,0} + \dots + a_{n,0} = \sum_{i=1}^{n} f_i(0)$$

The group public key is computable from the published commitments:

$$P = x \cdot G = \sum_{i=1}^{n} A_{i,0}$$

Why does this work as Shamir sharing? The combined shares $x_j$ are points on a "virtual" polynomial $f(X) = \sum_{i=1}^{n} f_i(X)$. This polynomial has degree $t - 1$ (a sum of degree-$(t-1)$ polynomials) and $f(0) = x$. So the combined shares are exactly Shamir shares of the group secret $x$ — even though nobody ever chose $f$ deliberately.
Concrete DKG example
$t = 2$, $n = 3$. Each participant independently picks a degree-1 (line) polynomial:
- $P_1$ picks: $f_1(X) = 7 + 3X$ (their secret contribution is $7$)
- $P_2$ picks: $f_2(X) = 5 + 2X$ (their secret contribution is $5$)
- $P_3$ picks: $f_3(X) = 4 + X$ (their secret contribution is $4$)
Now each participant evaluates their polynomial at everyone's index and sends the result privately:
| Sender evaluates at → | P₁ (p=1) | P₂ (p=2) | P₃ (p=3) |
|---|---|---|---|
| P₁: f₁(p) | 7 + 3 = 10 | 7 + 6 = 13 | 7 + 9 = 16 |
| P₂: f₂(p) | 5 + 2 = 7 | 5 + 4 = 9 | 5 + 6 = 11 |
| P₃: f₃(p) | 4 + 1 = 5 | 4 + 2 = 6 | 4 + 3 = 7 |
Each participant sums the column they received:
- $P_1$: $x_1 = 10 + 7 + 5 = 22$
- $P_2$: $x_2 = 13 + 9 + 6 = 28$
- $P_3$: $x_3 = 16 + 11 + 7 = 34$
The virtual polynomial is:

$$f(X) = f_1(X) + f_2(X) + f_3(X) = (7 + 5 + 4) + (3 + 2 + 1)X = 16 + 6X$$

The group secret is: $x = f(0) = 16$ — but nobody computed this! $P_1$ only knows $7$ (their own contribution), $P_2$ only knows $5$, $P_3$ only knows $4$.
The group public key: $P = 16 \cdot G$ — anyone can compute this from the published commitments.
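The whole DKG example can be replayed in a few lines. This sketch simulates all three participants in one process purely to check the arithmetic — in a real deployment each polynomial stays private to its owner and each $f_i(j)$ value private to its recipient:

```python
# Toy Pedersen DKG: each participant i picks f_i(X) = a_i0 + a_i1*X (degree t-1 = 1).
Q = 97
polys = {1: (7, 3), 2: (5, 2), 3: (4, 1)}    # (a_i0, a_i1) for P1, P2, P3

def f(i, X):
    a0, a1 = polys[i]
    return (a0 + a1 * X) % Q

# P_i privately sends f_i(j) to P_j; P_j sums everything it received.
combined_share = {j: sum(f(i, j) for i in polys) % Q for j in polys}
assert combined_share == {1: 22, 2: 28, 3: 34}

# The group secret is the sum of the constant terms -- nobody ever computes
# this in the real protocol; we do it here only to check consistency.
group_secret = sum(a0 for a0, _ in polys.values()) % Q
assert group_secret == 16

# Group public key "16*G" in the toy group (a point k*G is just k mod Q).
print("combined shares:", combined_share, "group public key:", group_secret)
```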
Feldman VSS: verifying that shares are correct
There's a trust problem: when $P_i$ sends $f_i(j)$ to $P_j$, how does $P_j$ know the value is correct? $P_i$ could send garbage or strategically wrong values. We need a way to verify shares without revealing the polynomial.
This is where the published commitments become essential. Participant $P_j$ can check:

$$f_i(j) \cdot G \stackrel{?}{=} \sum_{k=0}^{t-1} j^k \cdot A_{i,k}$$

Left side: multiply the generator $G$ by the share value $f_i(j)$ (a single scalar-point multiplication).
Right side: evaluate the "committed polynomial" at $j$ using the public commitments. This works because:

$$\sum_{k=0}^{t-1} j^k \cdot A_{i,k} = \sum_{k=0}^{t-1} j^k \cdot (a_{i,k} \cdot G) = \left( \sum_{k=0}^{t-1} a_{i,k} \, j^k \right) \cdot G = f_i(j) \cdot G$$

Both sides compute $f_i(j) \cdot G$ by different paths. The left side uses the secret share value. The right side uses only public commitments. If they match, the share is consistent with the published commitments. If they don't, $P_i$ cheated and can be identified.
The beauty: the verification happens entirely "in the exponent" (on elliptic curve points). Nobody learns the polynomial coefficients — they only verify consistency.
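In the toy group the bookkeeping looks like this. Note the caveat: here the commitments $A_{i,k} = a_{i,k} \cdot G$ are just the coefficients mod 97, which is exactly why this model has no hiding property — it only illustrates the consistency check:

```python
# Feldman-style share verification in the toy group (a point k*G is k mod Q).
Q = 97
a = (7, 3)                                  # P_i's secret coefficients (a_i0, a_i1)
A = tuple(coef % Q for coef in a)           # published commitments A_ik = a_ik * G

def share_for(j):
    return (a[0] + a[1] * j) % Q            # f_i(j), sent privately to P_j

j = 2
received = share_for(j)

# P_j checks: f_i(j)*G == sum_k j^k * A_ik  (only public values on the right)
lhs = received % Q
rhs = sum(pow(j, k, Q) * A[k] for k in range(len(A))) % Q
assert lhs == rhs
print("share for participant", j, "is consistent with the commitments")
```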
Lagrange Interpolation
We now have $n$ participants, each holding a share $x_i = f(i)$ — a point on a secret polynomial $f$ of degree $t - 1$. We need to recover $f(0) = x$ (the group secret) without anyone actually reconstructing the polynomial.
Lagrange interpolation gives us a formula that computes $f(0)$ directly from any $t$ points, as a weighted sum of the share values.
Building the intuition
Suppose you have 2 points on a line and want to find where it crosses the $y$-axis (i.e., the value at $X = 0$). You could find the slope, write the line equation, and plug in $X = 0$. Lagrange interpolation is a general formula that does this for polynomials of any degree — and it gives the answer as a weighted sum of the known $y$-values.
The formula
Given a signing set $S$ of $t$ participants, each with share $x_i$, the secret is:

$$x = f(0) = \sum_{i \in S} \lambda_i \cdot x_i$$

where the Lagrange coefficient for participant $i$ is:

$$\lambda_i = \prod_{\substack{j \in S \\ j \neq i}} \frac{j}{j - i}$$
This formula looks intimidating, but it's just a product of fractions involving the participant indices. Let's see it in action.
Where does this formula come from?
The Lagrange interpolation formula for a polynomial $f$ passing through points $(i, x_i)$ for $i \in S$ is:

$$f(X) = \sum_{i \in S} x_i \cdot \ell_i(X), \quad \text{where } \ell_i(X) = \prod_{\substack{j \in S \\ j \neq i}} \frac{X - j}{i - j}$$

Each $\ell_i(X)$ is called a Lagrange basis polynomial. It has a key property: $\ell_i(i) = 1$ and $\ell_i(j) = 0$ for every other $j \in S$. So each basis polynomial "selects" exactly one data point.
We only need $f(0)$, so we plug in $X = 0$:

$$\lambda_i = \ell_i(0) = \prod_{\substack{j \in S \\ j \neq i}} \frac{0 - j}{i - j} = \prod_{\substack{j \in S \\ j \neq i}} \frac{j}{j - i}$$

The last step is the sign flip: $\frac{-j}{i - j} = \frac{j}{j - i}$.
Concrete examples
Using our shares: $x_1 = 22$, $x_2 = 28$, $x_3 = 34$. The secret is $x = 16$.
With participants $\{1, 2\}$: $\lambda_1 = \frac{2}{2 - 1} = 2$, $\lambda_2 = \frac{1}{1 - 2} = -1$, so $x = 2 \cdot 22 + (-1) \cdot 28 = 44 - 28 = 16$.
With participants $\{1, 3\}$: $\lambda_1 = \frac{3}{3 - 1} = \frac{3}{2}$, $\lambda_3 = \frac{1}{1 - 3} = -\frac{1}{2}$, so $x = \frac{3}{2} \cdot 22 - \frac{1}{2} \cdot 34 = 33 - 17 = 16$.
With participants $\{2, 3\}$: $\lambda_2 = \frac{3}{3 - 2} = 3$, $\lambda_3 = \frac{2}{2 - 3} = -2$, so $x = 3 \cdot 28 - 2 \cdot 34 = 84 - 68 = 16$.
Every pair recovers the same secret. The Lagrange coefficients are different for different sets of participants, but the weighted sum always yields $x = 16$. This is the mathematical guarantee of Shamir's scheme.
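A small helper that evaluates the coefficients at zero in the integers modulo 97 (fractions become modular inverses) reproduces all three cases:

```python
# Lagrange coefficient lambda_i for signing set S, evaluated at X = 0, mod Q.
Q = 97

def lagrange_at_zero(i, S):
    num, den = 1, 1
    for j in S:
        if j != i:
            num = (num * j) % Q            # product of j
            den = (den * (j - i)) % Q      # product of (j - i)
    return num * pow(den, -1, Q) % Q       # fraction -> modular inverse

shares = {1: 22, 2: 28, 3: 34}
for S in ([1, 2], [1, 3], [2, 3]):
    secret = sum(lagrange_at_zero(i, S) * shares[i] for i in S) % Q
    assert secret == 16
    print("signing set", S, "recovers", secret)
```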
The critical property for FROST
Notice that Lagrange interpolation is a linear operation — the secret is a weighted sum of the shares:

$$x = \sum_{i \in S} \lambda_i \cdot x_i$$

This linearity is what makes threshold Schnorr signatures possible. Each participant multiplies their share $x_i$ by $\lambda_i$ and contributes that to the signature. Nobody reconstructs $x$ — they each contribute their weighted piece, and the sum magically equals $x$.
In curve-point terms:

$$P = x \cdot G = \sum_{i \in S} \lambda_i \cdot (x_i \cdot G) = \sum_{i \in S} \lambda_i \cdot Y_i$$
The Lagrange recombination works both in the scalar world (secret numbers) and in the point world (public keys). This "homomorphic" property — the ability to do the computation "in the exponent" — is the mathematical backbone of FROST.
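The same recombination on the public side can be checked in the toy group, where the verification shares $Y_i = x_i \cdot G$ are represented by the scalars themselves (an assumption of the toy model, not of real FROST):

```python
# Lagrange recombination "in the exponent": sum_i lambda_i * Y_i == P.
Q = 97
Y = {1: 22, 2: 28, 3: 34}        # verification shares x_i * G in the toy group
P = 16                           # group public key x * G
lam = {1: 2, 2: Q - 1}           # coefficients for signing set S = {1, 2}

assert sum(lam[i] * Y[i] for i in lam) % Q == P
print("lambda-weighted verification shares recombine to the group public key")
```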
FROST Protocol
We've built all the pieces. Now let's put them together into the actual FROST protocol: a two-round scheme where $t$ of $n$ participants cooperatively produce a standard Schnorr signature.
Setup: Distributed Key Generation (done once)
Before any signing can happen, the $n$ participants run the Pedersen DKG described above. Afterward, each participant $i$ holds:
- Their secret share $x_i$ (private — never revealed)
- The group public key $P$ (public — this is the address that will verify signatures)
- Public verification shares $Y_j = x_j \cdot G$ for all participants (public — used to verify individual signature shares)
Roles
- Participants $P_1, \dots, P_n$: hold secret shares, produce signature shares. At least $t$ must be online for any given signing session.
- Coordinator (also called Signature Aggregator): orchestrates the protocol, collects commitments and shares, produces the final signature. The Coordinator is semi-trusted — they cannot learn the secret key or forge signatures (even a corrupt Coordinator can't sign without honest participants). However, a corrupt Coordinator can refuse to produce the signature (denial of service) or falsely blame a participant for sending a bad share.
Round 1: Commitment
Each participant $i$ in the signing set $S$ (where $|S| \geq t$):
- Generates two random nonces: $d_i$ and $e_i$ (random scalars). These are generated as $H(\text{fresh randomness} \parallel x_i)$ — mixing randomness with the secret share as a hedge against a faulty random number generator.
- Computes commitments: $D_i = d_i \cdot G$ and $E_i = e_i \cdot G$.
- Sends $(i, D_i, E_i)$ to the Coordinator.
After Round 1, the Coordinator has everyone's commitments, but nobody knows anyone else's nonce values ($d_i$, $e_i$), only the corresponding curve points.
Critical safety rule: each nonce pair $(d_i, e_i)$ must be used in exactly one signing session. If a signing session is aborted, the nonces must be discarded and fresh ones generated for any retry. Reusing nonces across sessions can leak the secret key.
Round 2: Signature Share
The Coordinator broadcasts the message $m$ to sign and the full list of all commitments $(j, D_j, E_j)$ for every $j \in S$.
Each participant then independently computes:
Step 1. Binding factors. Compute the binding factor for each participant $j \in S$:

$$\rho_j = H_1(j \parallel m \parallel B)$$

where $B$ is the ordered list of all commitments $(k, D_k, E_k)$ for $k \in S$. The subscripts ($H_1$, $H_2$) denote domain-separated hash functions — they use different prefixes so that a hash computed for one purpose can't be confused with a hash for a different purpose. This is a standard defensive practice in cryptographic protocols.
Every participant computes the same $\rho_j$ values because they all have the same inputs (all commitments were broadcast).
Step 2. Group commitment. Compute the aggregate nonce point:

$$R = \sum_{j \in S} (D_j + \rho_j \cdot E_j)$$

Again, every participant gets the same $R$.
Step 3. Challenge. Compute the Schnorr challenge:

$$c = H_2(R \parallel P \parallel m)$$

Step 4. Lagrange coefficient. Compute $\lambda_i$ for participant $i$ given the signing set $S$.
Step 5. Signature share. Compute:

$$z_i = d_i + \rho_i \cdot e_i + \lambda_i \cdot x_i \cdot c$$

This is the moment where the participant uses their secret share $x_i$. After computing $z_i$, they delete $d_i$ and $e_i$ immediately.
Step 6. Send $z_i$ to the Coordinator.
Aggregation
The Coordinator now has all signature shares $z_i$ for $i \in S$. They:
- (Optional but recommended) Verify each share: $z_i \cdot G \stackrel{?}{=} D_i + \rho_i \cdot E_i + c \cdot \lambda_i \cdot Y_i$. If any share fails, the Coordinator identifies the misbehaving participant and aborts.
- Aggregate: $z = \sum_{i \in S} z_i$
- Output the signature: $(R, z)$
This signature passes standard Schnorr verification: $z \cdot G = R + c \cdot P$.
Why it's correct
Let's trace through the math one more time:

$$z = \sum_{i \in S} z_i = \sum_{i \in S} (d_i + \rho_i \cdot e_i) + c \cdot \sum_{i \in S} \lambda_i \cdot x_i$$

The nonce terms sum to $k$ (the discrete log of $R$):

$$\sum_{i \in S} (d_i + \rho_i \cdot e_i) = k, \quad \text{where } R = k \cdot G$$

The secret share terms recover $x$ by Lagrange interpolation:

$$\sum_{i \in S} \lambda_i \cdot x_i = x$$

Therefore $z = k + c \cdot x$, and $z \cdot G = k \cdot G + c \cdot (x \cdot G) = R + c \cdot P$. Valid Schnorr.
Complete worked example
Let's trace the entire FROST protocol end-to-end with concrete numbers. We'll work modulo $q = 97$ to keep the arithmetic manageable.
DKG (already done). Virtual polynomial $f(X) = 16 + 6X$, as in the DKG example above.
- Secret shares: $x_1 = 22$, $x_2 = 28$, $x_3 = 34$
- Group secret: $x = 16$ (nobody knows this)
- Group public key: $P = 16 \cdot G$
Signing set: $S = \{2, 3\}$ (participants $P_2$ and $P_3$ will sign)
Lagrange coefficients for $S = \{2, 3\}$:

$$\lambda_2 = \frac{3}{3 - 2} = 3, \qquad \lambda_3 = \frac{2}{2 - 3} = -2 \equiv 95 \pmod{97}$$

(In modular arithmetic, $-2 \equiv 95$ because $95 + 2 = 97 \equiv 0 \pmod{97}$.)
Round 1 — Each participant picks nonces and publishes commitments:
- $P_2$: picks $d_2 = 7$, $e_2 = 11$. Publishes $D_2 = 7 \cdot G$, $E_2 = 11 \cdot G$.
- $P_3$: picks $d_3 = 3$, $e_3 = 9$. Publishes $D_3 = 3 \cdot G$, $E_3 = 9 \cdot G$.
Round 2 — Compute everything and sign:
Binding factors (from hash — we'll suppose): $\rho_2 = 4$, $\rho_3 = 5$.
Group commitment:

$$R = (D_2 + 4 \cdot E_2) + (D_3 + 5 \cdot E_3) = (7 + 44) \cdot G + (3 + 45) \cdot G = 99 \cdot G = 2 \cdot G \quad (99 \equiv 2 \bmod 97)$$

So $R = 2 \cdot G$, meaning the aggregate nonce is $k = 2$.
Challenge (from hash — we'll suppose): $c = 10$.
Signature shares:
$P_2$: $z_2 = d_2 + \rho_2 \cdot e_2 + \lambda_2 \cdot x_2 \cdot c = 7 + 4 \cdot 11 + 3 \cdot 28 \cdot 10 = 51 + 840 = 891$
$891 \equiv 18 \pmod{97}$, so $z_2 = 18$.
$P_3$: $z_3 = d_3 + \rho_3 \cdot e_3 + \lambda_3 \cdot x_3 \cdot c = 3 + 5 \cdot 9 + (-2) \cdot 34 \cdot 10 = 48 - 680 = -632$
$-632 \equiv 47 \pmod{97}$, so $z_3 = 47$.
Aggregation:

$$z = z_2 + z_3 = 18 + 47 = 65$$

Verification — does this match a standard Schnorr signature? Check $z \cdot G \stackrel{?}{=} R + c \cdot P$: the left side is $65 \cdot G$; the right side is $2 \cdot G + 10 \cdot (16 \cdot G) = 162 \cdot G = 65 \cdot G$ (since $162 \equiv 65 \pmod{97}$). In scalar terms, $k + c \cdot x = 2 + 10 \cdot 16 = 162 \equiv 65$.
The final signature is $(R, z) = (2 \cdot G, 65)$ — a perfectly valid Schnorr signature under the group public key $P = 16 \cdot G$. A verifier sees a normal-looking single-signer Schnorr signature. They have no way to know it was produced by two participants collaborating through FROST.
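The complete worked example can be replayed mechanically. The sketch below simulates both signers and the Coordinator in one process, with the binding factors and challenge hard-coded to the values assumed above instead of being derived from real hashes, and with the toy-group convention that a point $k \cdot G$ is just the scalar $k$ modulo 97:

```python
# End-to-end FROST toy run (mod Q = 97).
Q = 97
shares = {2: 28, 3: 34}                  # secret shares of signing set S = {2, 3}
P = 16                                   # group public key 16*G
lam = {2: 3, 3: (-2) % Q}                # Lagrange coefficients for S = {2, 3}

# Round 1: each signer picks nonces and publishes commitments D_i, E_i.
nonces = {2: (7, 11), 3: (3, 9)}         # (d_i, e_i)
D = {i: d % Q for i, (d, _) in nonces.items()}
E = {i: e % Q for i, (_, e) in nonces.items()}

# Round 2: binding factors and challenge (assumed hash outputs).
rho = {2: 4, 3: 5}
R = sum(D[i] + rho[i] * E[i] for i in shares) % Q
assert R == 2
c = 10

# Each signer computes its signature share z_i = d_i + rho_i*e_i + lambda_i*x_i*c.
z = {i: (nonces[i][0] + rho[i] * nonces[i][1] + lam[i] * shares[i] * c) % Q
     for i in shares}
assert z == {2: 18, 3: 47}

# Coordinator checks each share against the public verification share Y_i = x_i*G
# (just the scalar x_i in this toy model), then aggregates.
for i in shares:
    assert z[i] == (D[i] + rho[i] * E[i] + c * lam[i] * shares[i]) % Q
z_total = sum(z.values()) % Q

# Standard Schnorr verification of the final signature (R, z) under P.
assert z_total == (R + c * P) % Q == 65
print("final signature:", (R, z_total))
```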
Summary of FROST properties
FROST produces a standard Schnorr signature — the output is indistinguishable from one created by a single signer. The protocol completes in 2 rounds (or just 1 if nonces are preprocessed). Any $t$ of $n$ participants can sign; the rest can be offline. Multiple signing sessions can run concurrently without compromising security.
If a participant misbehaves, the protocol provides identifiable abort — the cheater is detected, though a new session must be started without them. FROST is EUF-CMA secure under the One-More Discrete Logarithm assumption: an adversary controlling up to $t - 1$ participants cannot forge signatures.
The protocol is standardized in RFC 9591 (June 2024), with ciphersuites for Ed25519, Ed448, P-256, and secp256k1.
References
- FROST paper: Komlo, Goldberg. "FROST: Flexible Round-Optimized Schnorr Threshold Signatures." SAC 2020. ePrint 2020/852
- RFC 9591: "Two-Round Threshold Schnorr Signatures with FROST." June 2024. rfc-editor.org
- BIP-340: Wuille, Nick, Towns. "Schnorr Signatures for secp256k1." github.com/bitcoin/bips
- MuSig2: Nick, Ruffing, Seurin. "MuSig2: Simple Two-Round Schnorr Multi-Signatures." CRYPTO 2021. ePrint 2020/1261
- Shamir's Secret Sharing: Shamir. "How to Share a Secret." Communications of the ACM, 1979. doi:10.1145/359168.359176
- Fiat-Shamir transform: Fiat, Shamir. "How to Prove Yourself: Practical Solutions to Identification and Signature Problems." CRYPTO 1986. Springer
- Schnorr signature security proof: Pointcheval, Stern. "Security Arguments for Digital Signatures and Blind Signatures." Journal of Cryptology, 2000 (originally EUROCRYPT 1996). Springer
- Multi-user signature security: Menezes, Smart. "Security of Signature Schemes in a Multi-User Setting." Designs, Codes and Cryptography, 2004. Springer
- Ed25519: Bernstein, Duif, Lange, Schwabe, Yang. "High-speed high-security signatures." 2012. ed25519.cr.yp.to
- Feldman VSS: Feldman. "A Practical Scheme for Non-interactive Verifiable Secret Sharing." FOCS 1987. IEEE
- Pedersen DKG: Pedersen. "Non-Interactive and Information-Theoretic Secure Verifiable Secret Sharing." CRYPTO 1991. Springer
- Zcash FROST implementation (Rust): github.com/ZcashFoundation/frost
- FROST Interactive Explainer: romarq.github.io/FROST-Interactive-Explainer