Interview | Crypto recovery is a myth, prevention is key: Circuit

Victims of crypto hacks often find themselves victimized again by unscrupulous recovery firms, says Harry Donnelly, CEO of Circuit.

Summary
  • Most crypto recovery efforts after a hack are futile, says Circuit CEO
  • 95% of recovery firms could be predatory and offer no support
  • Prevention is key, as $3B was already lost to hacks this year

Crypto adoption keeps rising, yet despite years of innovation the industry is still failing some of its most vulnerable users. In a recent incident, a U.S. retiree lost $3 million in XRP after unknowingly compromising their cold wallet.

The incident shows that security is still the top issue in crypto. For this reason, crypto.news spoke to Harry Donnelly, CEO of the crypto security firm Circuit. He explained why the ecosystem lost over $3 billion to hacks this year alone, and why recovery is usually very difficult.

Crypto.news: We’ve seen a recent security incident where a wallet holder lost their life savings in a hack. What does this tell us about crypto asset security?

Harry Donnelly: This is the XRP wallet incident: a U.S. retiree reportedly lost about $3 million in XRP, their retirement savings. ZachXBT posted about it on Twitter. The victim said they tried to file a police report but couldn’t reach law enforcement. The funds were then laundered across roughly 120 transactions.

We don’t have full confirmation of the exact vector because the victim isn’t crypto-savvy; without access to their laptop to trace the steps, it’s hard to be certain. But cases like this often involve malware that scans a device for seed phrases and other secrets.

In this case, the person thought they had a cold wallet — purchased from Ellipal — but they imported the seed phrase onto their laptop. That defeats cold storage: once the seed phrase exists on an internet-connected machine, the hardware wallet’s protection is effectively gone.

CN: ZachXBT said many recovery firms are questionable. What is your view?

HD: Totally fair. When people are desperate, bad actors will prey on them. The worst actors often SEO-optimize their pages so they appear first when someone frantically searches “recover stolen crypto.”

Legitimate recovery is hard. Crypto is a bearer asset: possession of the key equals ownership. You can’t call a bank and reverse an on-chain transfer. Legit recovery firms are typically legal shops that work with law enforcement, use blockchain forensics tools like Chainalysis or TRM Labs, track the funds, and try to get exchanges to freeze accounts with legal notices.

But that only works if funds hit a KYC exchange willing and able to cooperate and if the jurisdiction is cooperative. Attackers often route funds to non-cooperative exchanges or mixing services; last year, under 5% of assets were recovered with those methods.

Predatory firms will charge large fees, something like $10,000, for basic scans and produce a report that gives victims false information. For example, they tell them to email Tornado Cash, which is useless.

CN: So it seems like recovery is a long shot. What’s the alternative?

HD: Because recovery probabilities are low, prevention is critical. Circuit focuses on preventing loss rather than relying on post-hack recovery. Once funds leave a wallet, chances of recovery are slim; stopping theft before it happens has a much higher success probability.

There are two loss modes: (1) you lose access to your private key (funds are inaccessible) or (2) someone else obtains your private key (funds are stolen). Circuit addresses both by protecting the assets directly rather than solely protecting the key.

We build what we call automatic asset extraction. Instead of only safeguarding a private key, we pre-create signed transactions that move funds to a predefined backup wallet. Those transactions are created ahead of time, encrypted, and stored — never broadcast unless the legitimate user triggers them.

CN: So, who controls that big red button?

HD: The user controls it. They go into our web app, verify their identity using 2FA, and press the button. That decrypts and broadcasts the transaction, and the funds move to the backup wallet.

We store the pre-signed transaction, encrypted, but the user is the only one who can decrypt and trigger it. They define the destination address in advance, and we cannot change that address. Once it’s signed, it’s locked. Our system simply holds it securely and allows the user to trigger it when needed.
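
Circuit hasn’t published its implementation, but the pattern Donnelly describes can be sketched in a few lines: pre-sign a sweep transaction to a fixed backup address, keep it encrypted at rest, and only decrypt and broadcast it when the owner triggers it after authenticating. The snippet below is a minimal illustration of that pattern, with the signing and broadcasting steps stubbed out; none of the names are Circuit’s.

```python
# Minimal sketch of the "pre-signed escape hatch" pattern described above.
# Not Circuit's code: the sweep transaction is assumed to be signed elsewhere,
# and broadcast() is a stand-in for submitting a raw transaction on-chain.
from cryptography.fernet import Fernet

def store_escape_hatch(signed_sweep_tx: bytes) -> tuple[bytes, bytes]:
    """Encrypt a pre-signed sweep transaction for at-rest storage.

    The transaction was signed in advance to send funds to a fixed,
    user-chosen backup address; once signed, that destination cannot change.
    """
    user_key = Fernet.generate_key()           # held by the user, not the service
    ciphertext = Fernet(user_key).encrypt(signed_sweep_tx)
    return user_key, ciphertext                # the service stores only the ciphertext

def trigger_escape_hatch(user_key: bytes, ciphertext: bytes, broadcast) -> None:
    """Called only after the user authenticates (e.g. via 2FA in the web app)."""
    signed_tx = Fernet(user_key).decrypt(ciphertext)
    broadcast(signed_tx)                       # push the already-signed transaction on-chain

# Example wiring with stand-in values:
if __name__ == "__main__":
    fake_signed_tx = b"raw-signed-transaction-bytes"
    key, blob = store_escape_hatch(fake_signed_tx)
    trigger_escape_hatch(key, blob, broadcast=lambda tx: print("broadcasting", tx))
```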

CN: Who uses this service at the moment?

HD: Right now, it’s all institutions and enterprises. We don’t serve retail users yet. Our partners are exchanges, asset managers, OTC desks. These are people managing large sums and client assets. For them, downtime or loss of access can be catastrophic.

One example is Shift Markets. We’re deploying our technology across 150 exchanges that they work with. These exchanges can’t afford to lose access to funds, even for a few hours.

For institutions, it’s not just about preventing theft. Sometimes someone misplaces a signing device, or a service like Fireblocks goes down. That can halt all operations — no deposits, no withdrawals.

With Circuit, they can recover within minutes instead of being down for days. And for them, that can mean saving their reputation — and millions in customer retention.

CN: And how do users choose their backup wallets? Should it be another hardware wallet, an exchange account, or a custodian?

HD: Great question. We recommend that the backup wallet be just as secure as the primary. So that means using different wallet providers, storing keys in different locations, and making sure the infrastructure isn’t co-located. You don’t want both sets of keys in the same vault or server.

Also, we enforce quorum approvals — 4-eyes or 6-eyes policies — to avoid any single point of failure. Most large institutions already operate this way. Some use different MPC or multisig setups for primary and backup wallets. Others use different secure facilities or even different jurisdictions. The idea is: if disaster hits one system, the other is unaffected.
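
As a trivial illustration of the quorum point, a 4-eyes or 6-eyes rule reduces to counting distinct approvers before an action is released. This is a toy check, not any vendor’s policy engine:

```python
# Toy 4-eyes / 6-eyes check: an action proceeds only with enough distinct approvers.
def quorum_met(approvers: set[str], required: int) -> bool:
    """Return True if at least `required` distinct people have approved."""
    return len(approvers) >= required

assert quorum_met({"alice", "bob"}, required=2)   # 4-eyes: two distinct approvers
assert not quorum_met({"alice"}, required=2)      # a single approver is never enough
```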

We also work with major insurance companies, and they recognize this as a risk reducer. A lot of crypto insurance claims are for lost access or stolen funds. By adding Circuit’s technology, firms become a lower risk. So insurance providers offer discounts to clients who use us. That makes insurance more accessible and, in turn, brings more institutional capital into crypto.

CN: Have you actually had any cases where someone had to use the red button?

HD: Yes, we’ve used the red button, both in real cases and in controlled tests. We’ve even deliberately handed access to white-hat attackers in simulated environments and let them try to steal the funds. Every time, it’s held up. Our engineering team has worked hard to make sure we’ve covered edge cases and real-world threats.

We’re working with some of the biggest players in the space who’ve tested it independently. We’ll have a public announcement in the next month or two showcasing some of those validations.

CN: And for institutions, the typical failure scenario?

HD: It depends on their wallet setup. If they’re using non-custodial services like Fireblocks, the institution bears some responsibility — they must be able to access their wallets even if Fireblocks is down or unavailable.

If they’re using fully custodial solutions like Coinbase or Anchorage, those providers manage everything end-to-end. But with Fireblocks, you still need your own secure access to the key shards or signing devices.

So imagine an exchange relying on Fireblocks, and they lose a device — maybe someone’s phone or YubiKey. That can temporarily lock them out, halting withdrawals and deposits.

CN: You mentioned earlier that attackers are getting more sophisticated. What’s your perspective on how the crypto industry is adapting to that? What’s changing in security?

HD: It’s similar to Web2 cybersecurity; it’s a cat-and-mouse game. New attacks emerge, we build defenses, attackers evolve again, and so on. Early on, the big breakthrough was multisig, requiring multiple keys to approve transactions.

Then came MPC wallets (multi-party computation), which improve on multisig. In a multisig setup, every signer holds a complete, standalone private key, so each compromised device hands the attacker a fully usable key. With MPC, a complete key never exists in one place: each party holds only a shard, and a shard on its own reveals nothing about the full key, which makes the setup more resilient.
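
A toy way to see the “a shard reveals nothing” property is additive secret sharing: split a secret into shards that only reconstruct it when all of them are combined. Production wallets use far more sophisticated threshold MPC, but the information-theoretic point is the same:

```python
# Toy additive secret sharing over a prime field. Any subset smaller than all n
# shards is statistically indistinguishable from random, so it leaks nothing
# about the secret. This is NOT production MPC; it only illustrates the property.
import secrets

P = 2**521 - 1  # a large (Mersenne) prime standing in for the key space

def split(secret: int, n: int) -> list[int]:
    shards = [secrets.randbelow(P) for _ in range(n - 1)]
    shards.append((secret - sum(shards)) % P)  # last shard makes the sum come out right
    return shards

def combine(shards: list[int]) -> int:
    return sum(shards) % P

key = secrets.randbelow(P)
shards = split(key, 3)
assert combine(shards) == key   # all three shards together recover the key
# any two shards alone are uniformly distributed and say nothing about `key`
```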

Companies like Fireblocks have had a lot of success with MPC. Then on top of that came policy engines — rules that block transactions under certain conditions. For example: “block all transfers over $1 million,” or “don’t allow transfers to non-whitelisted addresses.”
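
Those two example rules are easy to picture. Here is a deliberately minimal policy check, invented for illustration rather than drawn from Fireblocks or any other vendor:

```python
# Minimal policy-engine sketch for the two example rules above:
# block transfers over $1M and block transfers to non-whitelisted addresses.
from dataclasses import dataclass

WHITELIST = {"0xBackupVault", "0xTreasury"}   # hypothetical approved destinations
MAX_USD = 1_000_000

@dataclass
class Transfer:
    to: str
    amount_usd: float

def allowed(tx: Transfer) -> tuple[bool, str]:
    if tx.amount_usd > MAX_USD:
        return False, "blocked: amount exceeds the $1M policy limit"
    if tx.to not in WHITELIST:
        return False, "blocked: destination is not whitelisted"
    return True, "ok"

print(allowed(Transfer(to="0xUnknown", amount_usd=50_000)))      # blocked: not whitelisted
print(allowed(Transfer(to="0xTreasury", amount_usd=2_000_000)))  # blocked: over the limit
print(allowed(Transfer(to="0xBackupVault", amount_usd=10_000)))  # ok
```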

Then came detection tools, which are services that monitor chain activity and flag suspicious behavior. But today, most of those still require a human to act on the alert. In some setups, you might need approvals from people in the U.S., Europe, and Asia, which could take hours. Meanwhile, attacks are happening in minutes or even seconds.

We saw this in the SwissBorg/Kiln hack: $41 million gone in three minutes. Humans simply don’t respond that fast.

CN: When centralized exchanges freeze stolen funds, people usually understand. But when DeFi protocols freeze wallets or pause smart contracts, there’s often criticism about centralization. What’s your view on that?

HD: Look, ultimately, I think if you can prevent tens or hundreds of millions of dollars being stolen, and what it takes is to shut down a smart contract for a few hours, then I think you should do that.

I know there are very big proponents of decentralization, but decentralization is not going to take hold if people don’t adopt it. And people are not going to adopt it if they’re going to lose all their funds. At the end of the day, I think it’s as simple as that.

If you truly believe in this and want it to be adopted by the mainstream, by actual enterprises and institutions, they’re going to have to have confidence in it. And for the proponents who say “just let it be hacked” or “code is law,” I think the problem is that this attitude will fundamentally cap the growth of the space, however much we’d all like it to grow.

And I think you’re going to see two camps. You’ll have pools and protocols that just keep doing things the way they are, letting things run. And then you’ll have more institutionally and enterprise-focused infrastructure, where they do have safeguards, where they do have failsafes, and where insurance is built into the pools.

That’s already happening. And it’s in those pools that you’re going to see a lot more liquidity being deposited, because that’s where the real capital — the institutions — feel confident putting their funds. And when you think about what the biggest network effect in DeFi is, a lot of it comes down to liquidity.

So if you look at where a lot of liquidity is going to go, over time it should shift toward the places that have failsafes and checks in place — because it gives people more confidence.

CN: But someone might say, if a protocol has the ability to freeze wallets or pause smart contracts, don’t they also have the ability to drain the pool? What’s your take on that?

HD: Yeah, and I think that’s a fair point. If someone has the ability to pause it and put safeguards in place, does that also mean they can do anything they want with the funds?

I think the beauty of smart contracts — if you do them right — is that they’re immutable and transparent. You can define strict parameters ahead of time. You can hard-code the rules: when does this get paused, why does it get paused, and what happens to the funds after?

Do they get moved? If so, where? Can they only be moved to a specific location? After the pause, do they get returned? All of that can be encoded. It doesn’t have to be discretionary.
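
To make that concrete, here is a rough model of what “encoded, non-discretionary” pause rules might look like. It is written in Python for readability rather than as an actual smart contract, and every name in it (the guardians, the rescue vault, the time limit) is invented for illustration:

```python
# Conceptual model of hard-coded pause rules: who may pause, how long a pause
# can last, and the single destination funds may move to while paused.
# Illustrative only; on-chain this logic would live in immutable contract code.
import time

class PausablePool:
    PAUSE_LIMIT_SECONDS = 4 * 60 * 60          # pauses expire automatically after 4 hours
    RESCUE_ADDRESS = "0xHardcodedRescueVault"  # the only place funds may ever be moved

    def __init__(self, guardians: set[str]):
        self.guardians = frozenset(guardians)  # fixed at deployment, like a constructor arg
        self.paused_at = None

    def pause(self, caller: str) -> None:
        if caller not in self.guardians:
            raise PermissionError("only a guardian may pause the pool")
        self.paused_at = time.time()

    def is_paused(self) -> bool:
        return self.paused_at is not None and time.time() - self.paused_at < self.PAUSE_LIMIT_SECONDS

    def emergency_move(self, destination: str) -> str:
        if not self.is_paused():
            raise RuntimeError("emergency moves are only allowed while paused")
        if destination != self.RESCUE_ADDRESS:
            raise ValueError("funds can only move to the hard-coded rescue vault")
        return f"moving pooled funds to {destination}"

pool = PausablePool(guardians={"guardian-1", "guardian-2"})
pool.pause("guardian-1")
print(pool.emergency_move("0xHardcodedRescueVault"))
```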

So yes, if you give people full control to do whatever they want, that’s not great. People won’t want to deposit funds into those protocols. But if there are tightly defined parameters over what’s possible — and part of that includes freezing or pausing in the case of an emergency — then that actually gives people more confidence.

Because even the biggest protocols — like Euler, which had a huge TVL — got hacked. And they’d gone through multiple audits, code reviews, the whole thing. But there was still a small vulnerability that someone was able to exploit.

We are getting better at detecting these things, but new issues will always pop up. And like you said, it’s a cat-and-mouse game. You build a defense, then someone finds a new attack. Then you build a new defense, and so on.

CN: Is there anything you’ve been thinking about lately that you think the industry is overlooking?

HD: One of the things we spend a lot of time on internally is trying to make crypto insurance actually accessible. It goes back to what we’ve been talking about: there are always going to be new attacks, and then people will build new defenses. But something has to fill that gap in the meantime.

I think DeFi insurance — like what Nexus Mutual was trying to do — hasn’t really scaled the way people hoped. And a big part of that is because to offer meaningful insurance, you need enormous pools of capital behind it. That’s just how insurance works.

The traditional insurance world already has billions of dollars sitting in reserves. They know how to underwrite risk. If we can bring those players into the crypto space — and give them confidence in how risks are being mitigated — then we unlock something really big.

Because the truth is, if we want big banks or serious financial institutions to get involved in DeFi and on-chain finance, they’re going to need insurance. Full stop.

So if we can enable that — if we can give traditional insurers the tools and data they need to price risk and actually offer coverage — then suddenly, you’ve got a lot more capital that’s comfortable coming into the space.

And when that happens, everything grows. The protocols grow, the infrastructure matures, the users benefit. So yeah — I think unlocking real crypto insurance is one of the most important things we can do right now.
