Get .NET crash dumps from AWS ECS Fargate automatically: leverage Amazon EFS, Amazon S3, AWS DataSync, and AWS Lambda to make debugging easier for your dev team

How To Send .NET Crash Dumps To Slack From an ECS Fargate Task

Sometimes .NET applications crash in production and nobody knows why, because the logs and metrics look fine. It's quite bothersome and makes debugging very unpleasant. In such cases, memory dumps can simplify debugging and reduce troubleshooting time from days to minutes.

This article explains how to configure dumps for .NET applications deployed to AWS ECS Fargate and then forward them to the development team in the most convenient and secure way.


:::tip In this article we will create AWS resources by hand, and I will refer to the AWS documentation where it helps. Infrastructure as code won’t be our focus. Nevertheless, if you enjoy Terraform as much as I do, you can use open-source AWS modules for each section of this article. From my side, I can recommend taking a look at two AWS Terraform module projects:

  • https://github.com/cloudposse
  • https://github.com/terraform-aws-modules

:::


Solution architecture

It’s time to take a look at our architecture. I'll start by presuming the dev team isn't willing to pull .NET dumps directly from storage like EBS or EFS because of the complexity involved. S3 is much simpler for developers to fetch any type of file from, and it perfectly suits our expectations.

Aside from that, receiving proactive notifications when a new .NET dump is generated would be quite valuable. In this example I'll use Slack, but other options include Teams, Mattermost, WhatsApp, and so on. To send the notification message we will use a Lambda function and S3 triggers.

And the last, but not least important, notice: there is no simple, native way to attach an S3 bucket to ECS. For that reason we’ll create a middleware layer built on top of EFS, DataSync, and a sidecar ECS container / Lambda function. EFS will be used as intermediate file storage for all our ECS tasks, DataSync will transfer data from EFS to S3 automatically, and a sidecar container or Lambda will clean up old data from EFS.

EFS Dump Lifecycle & Notification Flow

Let's quickly review the diagram:

  1. AWS Lambda deletes old EFS files on a schedule configured in EventBridge.

  2. Alternatively, during the ECS task bootstrap phase, a sidecar janitor container removes outdated dumps from EFS and exits.

  3. When the .NET application crashes, a new dump is written to the EFS filesystem, and only after that is the process terminated.

  4. DataSync copies the data to S3 after a new file appears on EFS.

  5. An S3 event notification for the newly uploaded file triggers AWS Lambda.

  6. The Lambda function uses its IAM role to obtain the necessary secrets from AWS Secrets Manager.

  7. The Lambda function sends a message to Slack via its API.


Step-by-step implementation

Create ECS Fargate task

In this section we need to create an ECS Fargate Task using a sample .NET application.

Prerequisites

Before we proceed, there are a few steps that need to be completed:

  1. Set up an ECS cluster via the AWS Console or Terraform. An official AWS guide: Creating an Amazon ECS cluster for Fargate workloads.

  2. Create an IAM execution role for the ECS task. To do it, you can follow this AWS guide. In the scope of this article I will use the name kvendingoldo-dotnet-crash-dump-demo for the IAM execution role.

This minimal trust policy for the execution role will be enough:

{   "Version": "2012-10-17",   "Statement": [     {       "Effect": "Allow",       "Principal": {         "Service": "ecs-tasks.amazonaws.com"       },       "Action": "sts:AssumeRole"     }   ] }

As well as a minimal permissions policy:

{   "Version": "2012-10-17",   "Statement": [     {       "Effect": "Allow",       "Action": [         "ecr:GetAuthorizationToken",         "ecr:BatchCheckLayerAvailability",         "ecr:GetDownloadUrlForLayer",         "ecr:BatchGetImage",         "logs:CreateLogStream",         "logs:PutLogEvents"       ],       "Resource": "*"     }   ] }


Create task definition

Once all prerequisites are ready, it’s time to create a minimal Fargate task with a sample .NET app. To do it, follow the official AWS guide and use this task definition JSON file:

{  "containerDefinitions": [    {      "cpu": 0,      "essential": true,      "image": "mcr.microsoft.com/dotnet/samples:aspnetapp",      "mountPoints": [],      "name": "app",      "portMappings": [        {          "containerPort": 8000,          "hostPort": 8000,          "protocol": "tcp"        }      ],      "systemControls": [],      "volumesFrom": []    }  ],  "cpu": "256",  "executionRoleArn": "kvendingoldo-dotnet-crash-dump-demo",  "family": "kvendingoldo-dotnet-crash-dump-demo",  "memory": "512",  "networkMode": "awsvpc",  "placementConstraints": [],  "requiresCompatibilities": ["FARGATE"],  "volumes": [],  "tags": [] }


Configure .NET dumps

By default, .NET apps do not generate any dumps. To enable dump generation, we must set the following environment variables:


```bash
# Force the runtime to generate a stack dump on unhandled exceptions.
COMPlus_StackDumpOnUnhandledException=1
# Enable dump generation on crash.
COMPlus_DbgEnableMiniDump=1
# Choose the dump type:
#   1 = Mini   (small: module lists, thread lists, stacks, exception info)
#   2 = Heap   (stacks plus most of the heap)
#   3 = Triage (like Mini, with sensitive data scrubbed)
#   4 = Full   (all process memory -- use carefully)
COMPlus_DbgMiniDumpType=4
# Target path for the dump file (EFS is mounted here).
COMPlus_DbgMiniDumpName=/dumps/dump-%e-%p-%t.dmp
```

These variables can be added directly to the Dockerfile or defined as environment variables in the ECS task definition JSON.

In our example, let's inject them into the ECS task specification. To accomplish this, we'll add them to the containerDefinitions[0].environment, as shown below:


"environment": [    {        "name": "COMPlus_StackDumpOnUnhandledException",        "value": "1"    },    {        "name": "COMPlus_DbgMiniDumpType",        "value": "4"    },    {        "name": "COMPlus_DbgEnableMiniDump",        "value": "1"    },    {        "name": "COMPlus_DbgMiniDumpName",        "value": "/dumps/%t-kvendingoldo-dotnet-demo-crash.dmp"    } ]


:::tip As you can see, I use a few placeholders in COMPlus_DbgMiniDumpName. .NET automatically expands the following placeholders in the dump file name:

  • %e - executable name
  • %p - process ID
  • %t - timestamp

See these two links for further information on collecting and analyzing .NET crash dumps:

  • Collect .NET Crash Dumps (Microsoft Learn)
  • Debugging .NET Core memory issues (on Linux) with dotnet dump

:::


Create EFS storage and mount it to the ECS Fargate Task

As I mentioned at the beginning of this article, attaching an S3 bucket to an ECS task is quite difficult; instead, we will use Amazon EFS (Elastic File System) as intermediate storage for .NET dump files, which can be easily mounted to a set of ECS tasks.

:::tip To create EFS storage, follow the official AWS guide: Amazon ECS Tutorial: Using Amazon EFS File Systems

:::

There’s nothing special to add to the official documentation. Just make sure that:

  • EFS and the ECS cluster are in the same VPC
  • EFS can be accessed by the ECS tasks over NFS (port 2049/tcp). To allow this, open inbound NFS traffic in the EFS security group, as sketched right after this list.
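For reference, here is a minimal boto3 sketch of that security-group rule, assuming two hypothetical security group IDs: sg-efs123 attached to the EFS mount targets and sg-ecs456 attached to the ECS tasks.

```python
import boto3

ec2 = boto3.client("ec2")

# Allow NFS (2049/tcp) from the ECS tasks' security group into the EFS security group.
ec2.authorize_security_group_ingress(
    GroupId="sg-efs123",  # hypothetical EFS security group ID
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 2049,
            "ToPort": 2049,
            "UserIdGroupPairs": [
                {"GroupId": "sg-ecs456"}  # hypothetical ECS tasks security group ID
            ],
        }
    ],
)
```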

To mount the EFS filesystem into the ECS task we must grant the necessary permissions to the kvendingoldo-dotnet-crash-dump-demo IAM role (pay attention to the placeholders):


{   "Version": "2012-10-17",   "Statement": [     {       "Sid": "AllowEFSAccess",       "Effect": "Allow",       "Action": [         "elasticfilesystem:ClientMount",         "elasticfilesystem:ClientWrite",         "elasticfilesystem:ClientRootAccess"       ],       "Resource": "arn:aws:elasticfilesystem:<region>:<account-id>:file-system/<filesystem-id>"     }   ] }

As a final step, define the EFS volume in your ECS task definition and add the corresponding mount point to the app container definition (replace the fileSystemId value fs-xxxxxx with your real file system ID after bootstrapping):


"volumes": [   {     "name": "dotnet-dumps",     "efsVolumeConfiguration": {       "fileSystemId": "fs-xxxxxx",       "rootDirectory": "/"     }   } ]


"mountPoints": [   {     "containerPath": "/dumps",     "readOnly": false,     "sourceVolume": "dotnet-dumps"   } ]

Configure AWS DataSync to transfer EFS files to S3

DataSync is a standard AWS service for transferring data between various types of storage. In our case, it will move the .NET dumps from EFS to S3.

To reach our goal, we have to:

  • Create an S3 bucket to store our .NET dumps. Use this official doc to create a bucket. Further in this article I’ll use the S3 bucket name kvendingoldo-dotnet-demo-crash.
  • Create a DataSync task. Use this official doc to create DataSync. The service parameters I'll be using are listed below (a scripted equivalent is sketched right after this list):
    • Source: EFS
    • Destination: S3 bucket (e.g., s3://kvendingoldo-dotnet-demo-crash/)
    • Include path filters like /dumps/*
    • Schedule: sync every minute
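If you want to create the same resources from code, here is a minimal boto3 sketch. It assumes the EFS filesystem, a subnet, a security group, and an IAM role that allows DataSync to write to the bucket already exist; every ARN below is a placeholder.

```python
import boto3

datasync = boto3.client("datasync")

# Source location: the root of the EFS filesystem where the dumps are written.
efs_location = datasync.create_location_efs(
    EfsFilesystemArn="arn:aws:elasticfilesystem:us-east-2:111111111111:file-system/fs-xxxxxx",
    Ec2Config={
        "SubnetArn": "arn:aws:ec2:us-east-2:111111111111:subnet/subnet-xxxxxx",
        "SecurityGroupArns": [
            "arn:aws:ec2:us-east-2:111111111111:security-group/sg-xxxxxx"
        ],
    },
    Subdirectory="/",
)

# Destination location: the S3 bucket for dumps.
s3_location = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::kvendingoldo-dotnet-demo-crash",
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::111111111111:role/datasync-s3-access"},
    Subdirectory="/",
)

# The transfer task itself: copy only the dump files, on a schedule.
datasync.create_task(
    SourceLocationArn=efs_location["LocationArn"],
    DestinationLocationArn=s3_location["LocationArn"],
    Name="kvendingoldo-dotnet-crash-dump-demo",
    Schedule={"ScheduleExpression": "rate(1 hour)"},  # adjust the frequency to your needs
    Includes=[{"FilterType": "SIMPLE_PATTERN", "Value": "/dumps/*"}],
)
```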

Create Slack alerts based on AWS Lambda

As was said earlier, alerts about new .NET dumps are extremely helpful for the development team.

From an architecture viewpoint, the alerts can be built in a number of ways:

  1. A simple Lambda function, triggered by S3 events, that sends messages to Slack via its API.
  2. S3 event notifications publish messages to an SNS topic, which then triggers a Lambda function that forwards the events to Slack.

Since we don't expect a high load, the first option is better for us. In case you want to implement the second option, use these two links:

  • Terraform module for deploying SNS and Lambda stack
  • A guide for configuring S3 events to SNS


:::tip We use Python to send messages to Slack. In this article we’ll send only a link to the S3 file, but in some cases it’s required to send the entire file. The Slack API changed some time ago, and file uploads can be a little bit complicated. If you want to know more, please see the “Uploading files to Slack with Python” article.

:::

OK, let’s build the alerting step by step:

1. Create Slack secret

Create an AWS Secrets Manager secret named kvendingoldo-dotnet-crash-dump-demo with one field: slack_webhook_url. This key should contain the URL of your Slack webhook (to learn more about Slack webhooks, check the official guide).
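If you prefer to create the secret from code, here is a minimal boto3 sketch; the webhook URL below is a placeholder.

```python
import json
import boto3

secrets = boto3.client("secretsmanager")

# Store the Slack webhook URL as a JSON field, matching what the Lambda code expects.
secrets.create_secret(
    Name="kvendingoldo-dotnet-crash-dump-demo",
    SecretString=json.dumps(
        {"slack_webhook_url": "https://hooks.slack.com/services/XXX/YYY/ZZZ"}
    ),
)
```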

2. Configure AWS Lambda

We won't go into depth about the creation of AWS Lambda, but we will highlight some key points. To get more fundamental information about AWS Lambda setup, see the official guide.

2.1. Make sure that the Lambda IAM role has permission to read from S3:

{   "Effect": "Allow",   "Action": "s3:GetObject",   "Resource": "arn:aws:s3:::kvendingoldo-dotnet-demo-crash/*" }

2.2: To read data from AWS Secrets Manager, we have to set an environment variable in the AWS Lambda configuration: SECRET_NAME=kvendingoldo-dotnet-crash-dump-demo

2.3: Upload Python code to Lambda

```python
import json
import urllib3
import os
import boto3


def get_secret(secret_name):
    client = boto3.client("secretsmanager")
    try:
        response = client.get_secret_value(SecretId=secret_name)
        if "SecretString" in response:
            secret = response["SecretString"]
            try:
                return json.loads(secret)
            except json.JSONDecodeError:
                return secret
        else:
            return response["SecretBinary"]
    except Exception as e:
        print(f"Error retrieving secret: {e}")
        return None


def lambda_handler(event, context):
    print("Event received:", json.dumps(event))

    secret_name = os.environ.get('SECRET_NAME', '')
    if secret_name == "":
        return {
            'statusCode': 500,
            'body': json.dumps("SECRET_NAME env variable is empty")
        }

    secret = get_secret(secret_name)
    slack_webhook_url = secret["slack_webhook_url"]

    for record in event['Records']:
        bucket_name = record['s3']['bucket']['name']
        file_name = record['s3']['object']['key']
        region = record['awsRegion']

        if ".aws" in file_name:
            print(f"Skipping internal file: {file_name}")
            continue

        message = (
            f":package: *New .NET dump is uploaded!*\n\n"
            f":cloud: Bucket: `{bucket_name}`\n"
            f":floppy_disk: File: `{file_name}`\n"
            f":link: Link: https://{bucket_name}.s3.{region}.amazonaws.com/{file_name}"
        )

        http = urllib3.PoolManager()
        slack_resp = http.request(
            "POST",
            slack_webhook_url,
            body=json.dumps({
                "text": message
            }),
            headers={
                "Content-Type": "application/json"
            }
        )
        if slack_resp.status != 200:
            raise Exception(
                f"Slack webhook request failed with status {slack_resp.status}: {slack_resp.data.decode('utf-8')}")

    return {
        "statusCode": 200,
        "body": json.dumps("Message has been sent successfully!")
    }
```

2.4: Configure S3 Event Notifications for your S3 bucket. To do this, go to the bucket -> Properties -> Event notifications and select "Create event notification". Configure the event using the following options (a scripted equivalent is sketched right after this list):

  • Event name: kvendingoldo-dotnet-demo-crash

  • Prefix: dumps/

  • Event type: s3:ObjectCreated:*

  • Target: <Your Lambda function Name>
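The same notification can be configured programmatically. Below is a minimal boto3 sketch, assuming a hypothetical Lambda function ARN and that the function already grants s3.amazonaws.com permission to invoke it.

```python
import boto3

s3 = boto3.client("s3")

# Trigger the Lambda function for every object created under the dumps/ prefix.
# Note: this call replaces the bucket's entire notification configuration.
s3.put_bucket_notification_configuration(
    Bucket="kvendingoldo-dotnet-demo-crash",
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                "Id": "kvendingoldo-dotnet-demo-crash",
                "LambdaFunctionArn": "arn:aws:lambda:us-east-2:111111111111:function:dotnet-dump-notifier",  # placeholder
                "Events": ["s3:ObjectCreated:*"],
                "Filter": {
                    "Key": {"FilterRules": [{"Name": "prefix", "Value": "dumps/"}]}
                },
            }
        ]
    },
)
```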


Configure EFS storage clean-up

Perfect, the .NET dump delivery chain is ready, but what about old dumps? EFS does not allow us to delete old files using lifecycle policies; we can only transition them to the Infrequent Access storage class, which is not enough if we do not want to pay for unnecessary space.

To solve this issue, there are two options:

  1. Create a Lambda function (or a dedicated ECS task) that mounts EFS and cleans up old files on a CRON schedule.
  2. Create an ECS sidecar container that cleans up old EFS files during the task initialization phase.

Let’s check both of them.

Option 1: AWS Lambda

This is the best solution because it is unaffected by the lifecycle of ECS tasks and other factors. To implement this strategy, you need to create a Lambda function with a mounted EFS storage (learn more about mounting a filesystem to Lambda from the official doc) and the following Python code:

```python
import os
import time
import json


def lambda_handler(event, context):
    # Note: you can only mount the filesystem under the /mnt/ directory.
    directory = '/mnt/dumps'
    # File pattern to match
    pattern = 'crash.dmp'
    # Time in minutes (by default 1d)
    minutes_old = 1440
    # Convert minutes to seconds
    age_seconds = minutes_old * 60
    # Current time
    now = time.time()

    for root, dirs, files in os.walk(directory):
        for file in files:
            if pattern in file:
                file_path = os.path.join(root, file)
                file_mtime = os.path.getmtime(file_path)
                if now - file_mtime > age_seconds:
                    print(f"Found a file that is older than {minutes_old} minutes: {file_path}")
                    try:
                        os.remove(file_path)
                    except Exception as e:
                        print(f"Failed to delete {file_path}: {e}")

    return {
        "statusCode": 200,
        "body": json.dumps("EFS clean-up completed successfully!")
    }
```

As you can see, this is a simple code that deletes files from mounted storage that are older than one day.

When your Lambda is ready, we also need to configure a CRON trigger to run the function periodically. It can be created using an EventBridge (CloudWatch Events) rule; a minimal sketch follows.
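This boto3 sketch wires up a daily schedule; the rule name and the clean-up function ARN are hypothetical placeholders.

```python
import boto3

events = boto3.client("events")
lambda_client = boto3.client("lambda")

function_arn = "arn:aws:lambda:us-east-2:111111111111:function:efs-dump-cleanup"  # placeholder

# Run the clean-up function once a day.
events.put_rule(
    Name="efs-dump-cleanup-daily",  # hypothetical rule name
    ScheduleExpression="rate(1 day)",
)

# Point the rule at the Lambda function.
events.put_targets(
    Rule="efs-dump-cleanup-daily",
    Targets=[{"Id": "efs-dump-cleanup", "Arn": function_arn}],
)

# Allow EventBridge to invoke the function.
lambda_client.add_permission(
    FunctionName=function_arn,
    StatementId="allow-eventbridge-cleanup",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn="arn:aws:events:us-east-2:111111111111:rule/efs-dump-cleanup-daily",  # placeholder
)
```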

That’s it, after all of these steps your EFS storage will be cleaned up automatically by your CRON schedule.


Option 2: ECS sidecar container

To implement this option we have to add a new container to our task definition:

{  "essential": false,  "name": "janitor",  "image": "public.ecr.aws/amazonlinux/amazonlinux:2",  "command": [    "bash",    "-lc",    "find /dumps -name '*crash.dmp*' -type f -mmin +10080 -print -delete"  ],  "mountPoints": [    {      "containerPath": "/dumps",      "readOnly": false,      "sourceVolume": "dotnet-dumps"    }  ],  "linuxParameters": {    "initProcessEnabled": true  } }

The logic behind this task:

  • The ECS task starts with two containers: app and janitor.
  • The janitor container cleans up outdated EFS files and exits. The task itself is not interrupted or stopped, thanks to the ECS option "essential": false.

As you can see, this technique is quite straightforward and relies on the find command, which you can customize. In the example, it deletes files that are older than 10080 minutes (7 days). Of course, this strategy is less desirable than the first one for long-lived ECS tasks, but it may be more convenient for short-lived ECS tasks or prototyping.


Testing time

In this section, we won't do a deep dive into the .NET application build. For testing purposes, you can modify the sample aspnetapp that we used in the beginning.

The simplest way to crash a .NET application is Environment.FailFast(). This method is commonly used to simulate hard crashes.

Let’s simulate the crash:

  1. Add the line Environment.FailFast("kvendingoldo-dotnet-demo-crash .NET example crash"); to the dotnet-docker/samples/aspnetapp/aspnetapp/Program.cs file.
  2. Build a new Docker image and re-create the ECS task.
  3. The ECS task will terminate, but it will first generate a .NET crash dump, which will be available in S3 a few seconds later.
  4. At the final phase, you'll receive a message in your Slack like this:

```
📦 New .NET dump is uploaded!

☁️ Bucket: kvendingoldo-dotnet-demo-crash
💾 File: 1739104252-kvendingoldo-dotnet-demo-crash.dmp
🔗 Link: https://kvendingoldo-dotnet-demo-crash.s3.us-east-2.amazonaws.com/1739104252-kvendingoldo-dotnet-demo-crash.dmp
```


Possible improvements

Before wrapping up the article, I'd like to provide some comments on potential improvements:

  1. It would be a good idea to generate pre-signed URLs for S3 objects instead of plain links (see the sketch after this list).
  2. Set lifecycle policies on the S3 bucket to delete old dumps automatically.
  3. Use SNS to send notifications about new S3 objects to multiple destinations.
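For the first item, the notification Lambda could build a pre-signed link instead of the plain object URL it sends today. A minimal sketch of such a helper, using the standard boto3 call (the bucket and key would come from the S3 event record):

```python
import boto3

s3 = boto3.client("s3")

def build_presigned_link(bucket_name: str, file_name: str) -> str:
    # Generate a temporary download link instead of a plain (possibly private) object URL.
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": bucket_name, "Key": file_name},
        ExpiresIn=3600,  # link is valid for one hour
    )
```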

Conclusion

In production environments, quick visibility into faults is critical. Automating dump delivery reduces MTTR (Mean Time To Resolution) and improves incident response.

As you can see, implementing this procedure is not as difficult as you might expect. Yes, we used many AWS services to accomplish these tasks, but on closer inspection, each of them plays a necessary role.

I hope this article helped you build your own dump delivery chain and made your development team happier.

Feel free to modify the proposed approach, and please contact me anytime if you have any questions.

Happy coding!
