Analyzing the Effects of June 2025 New Mexico Wildfires

NASA monitors wildfire activity and its impact on the landscape
Author

Ethan Kerr

Published

November 11, 2025

🚀 Launch in Disasters-Hub JupyterHub (requires access)

To obtain credentials for the VEDA Hub, follow this link for more information.

Disclaimer: it is highly recommended to run this tutorial within the NASA VEDA JupyterHub, which already includes functions for processing and visualizing data specific to VEDA stories. Running the tutorial outside of the VEDA JupyterHub may lead to errors, specifically related to EarthData authentication. Additionally, it is recommended to use the Pangeo workspace within the VEDA JupyterHub, since certain packages relevant to this tutorial are already installed.

If you do not have a VEDA JupyterHub account, you can run this notebook in your local environment by launching it on MyBinder.




NASA provided satellite imagery at the request of federal and state emergency management officials in response to the Trout Fire near Silver City, New Mexico, in late June 2025. The satellite images assisted with search and rescue, evacuation planning, and understanding the scope and development of the fire as it progressed.

The Trout Fire was caused by a lightning strike, burned over 47,000 acres, prompted evacuations, and destroyed two homes.

In this notebook, we will explore Sentinel-2, Normalized Burn Ratio Difference (dNBR), and OPERA Disturbance Alert datasets, and how they were used in this Disasters article to monitor the effects of wildfires.

Approach

  1. Identify the available dates and temporal frequency of observations for collections pertaining to the New Mexico wildfire event
  2. Pass the STAC item into the Raster API collection endpoint
  3. Visualize tiles for each date/time of interest using folium
  4. Repeat this process for three different satellite products to demonstrate the available data capabilities

Terminology

Navigating data via the Disasters API, you will encounter terminology that is different from browsing in a typical filesystem. We'll define some terms here which are used throughout this notebook.

- catalog: All datasets available at the /stac endpoint
- collection: A specific dataset, e.g. CarbonTracker-CH₄ Isotopic Methane Inverse Fluxes
- item: One granule in the dataset, e.g. one monthly file of methane inverse fluxes
- asset: A variable available within the granule, e.g. microbial, fossil, or pyrogenic methane fluxes
- STAC API: SpatioTemporal Asset Catalog endpoint for fetching metadata about available datasets
- Raster API: Endpoint for fetching the data itself, for imagery and statistics
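As a quick orientation, the catalog and its collections can be browsed directly with pystac_client. This is a minimal sketch using the same STAC endpoint defined later in this notebook; the exact collections listed will depend on what is currently published.

# Open the Disasters STAC catalog and list a few of its collections
from pystac_client import Client

STAC_API_URL = "https://dev.openveda.cloud/api/stac"  # same endpoint used below
catalog = Client.open(STAC_API_URL)
for collection in list(catalog.get_collections())[:5]:
    print(collection.id, "-", collection.title)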

Install the Required Libraries

Required libraries are pre-installed on the VEDA JupyterHub. If you need to run this notebook elsewhere, please install them with this line in a code cell:

%pip install requests folium pystac_client branca matplotlib --quiet

# For querying
import requests
from pystac_client import Client
# For mapping
import folium
import folium.plugins
from folium.plugins import DualMap
from folium import Map, TileLayer
from branca.element import Template, MacroElement
import branca.colormap as cm
import matplotlib as mpl

About the Data: Sentinel-2 True Color/Color IR

The True Color RGB composite approximates how the surface would appear to the naked eye from space. It is created using the red, green, and blue channels of the respective instrument.

The Color Infrared composite is created using the near-infrared, red, and green channels, making areas impacted by the fires easier to distinguish. The near-infrared channel can also see through thin clouds. Healthy vegetation appears red and water appears blue.

These data will allow us to view the burn scar caused by the fire and compare it to the pre-fire landscape.
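The notebook below fetches pre-rendered tiles rather than raw bands, but conceptually a color infrared composite is just a band stack. Here is a minimal sketch assuming hypothetical NIR, red, and green reflectance arrays (random data stands in for a real granule):

import numpy as np

# Hypothetical reflectance bands standing in for a real Sentinel-2 granule
nir = np.random.rand(256, 256)
red = np.random.rand(256, 256)
green = np.random.rand(256, 256)

def to_uint8(band):
    # Stretch a reflectance band to 0-255 for display
    scaled = (band - band.min()) / (band.max() - band.min() + 1e-12)
    return (scaled * 255).astype("uint8")

# Color IR composite: NIR drives the red channel, red the green channel, green the blue channel
color_ir = np.dstack([to_uint8(nir), to_uint8(red), to_uint8(green)])
print(color_ir.shape, color_ir.dtype)  # (256, 256, 3) uint8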

Query the STAC API for Sentinel-2

# Provide STAC and RASTER API endpoints
STAC_API_URL = "https://dev.openveda.cloud/api/stac"
RASTER_API_URL = "https://dev.openveda.cloud/api/raster"

# Declare collection of interest - sentinel-2 daily data
collection_name = "sentinel-2-all-vars-daily"
# Fetch the collection from the STAC API
catalog = Client.open(STAC_API_URL)
collection = catalog.get_collection(collection_name)
# Print the properties of the collection to the console
collection

By looking at the documentation for the Sentinel-2 imagery for this event, we can identify the range of dates of interest.

# The search function lets you search for items within a specific date/time range
search = catalog.search(
    collections=collection_name,
    datetime=['2025-06-09T00:00:00Z','2025-06-29T00:00:00Z']
)
items = search.item_collection()
# Print how many items we found in our search
print(f"# items found: {len(items)}")
# items found: 10
# Examine the first item in the collection
# Keep in mind that a list starts from 0, 1, 2... therefore items[0] is referring to the first item in the list/collection
items = search.item_collection()
items[0]
# Restructure our items into a dictionary where the keys are the item dates
# Then we can query more easily by date, e.g. "2025-06-09"
items_dict = {item.properties["datetime"][:10]: item for item in items}

Now we will look at the possible products available under the item's assets and store the chosen asset name in a variable.
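Listing an item's assets is a quick way to see which products are available; a minimal sketch using the items returned by the search above:

# Print the asset names available on the first item returned by the search
print(list(items[0].assets.keys()))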

asset_name = "colorIR" #or "trueColor"

Fetch Imagery from Raster API for Sentinel-2

There are several available dates for this event, but by trial and error we can find pre-fire and post-fire images over the Trout Fire; listing the available dates (as shown below) helps here.
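A quick way to see the candidate dates is to list the keys of the items_dict built above; a minimal sketch:

# Print the available observation dates in chronological order
print(sorted(items_dict.keys()))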

# Specify two date/times that you would like to visualize, using the format of items_dict.keys()
dates = ["2025-06-09", "2025-06-29"]
# Extract collection name and item ID for the first date
observation_date_1 = items_dict[dates[0]]
collection_id = observation_date_1.collection_id
item_id = observation_date_1.id
# Select the relevant asset (the color infrared composite)
asset = observation_date_1.assets[asset_name]
raster_bands = asset.extra_fields.get("raster:bands", [{}])
# Print the raster bands' information
raster_bands
[{'scale': 1.0,
  'nodata': 0.0,
  'offset': 0.0,
  'sampling': 'area',
  'data_type': 'uint8',
  'histogram': {'max': 255.0,
   'min': 31.0,
   'count': 11,
   'buckets': [1167,
    2047,
    1637,
    1983,
    26820,
    174344,
    172968,
    101871,
    30539,
    17116]},
  'statistics': {'mean': 174.80425718012714,
   'stddev': 27.317252333798095,
   'maximum': 255,
   'minimum': 31,
   'valid_percent': 87.95561863327674}},
 {'scale': 1.0,
  'nodata': 0.0,
  'offset': 0.0,
  'sampling': 'area',
  'data_type': 'uint8',
  'histogram': {'max': 255.0,
   'min': 30.0,
   'count': 11,
   'buckets': [2540,
    6060,
    54441,
    96029,
    140881,
    130869,
    73605,
    13715,
    2127,
    10225]},
  'statistics': {'mean': 137.3293169359764,
   'stddev': 33.809314113255354,
   'maximum': 255,
   'minimum': 30,
   'valid_percent': 87.95561863327674}},
 {'scale': 1.0,
  'nodata': 0.0,
  'offset': 0.0,
  'sampling': 'area',
  'data_type': 'uint8',
  'histogram': {'max': 255.0,
   'min': 30.0,
   'count': 11,
   'buckets': [2777,
    11381,
    131473,
    208710,
    147941,
    13304,
    2085,
    1347,
    1396,
    10078]},
  'statistics': {'mean': 112.29378576868265,
   'stddev': 28.289548883427734,
   'maximum': 255,
   'minimum': 30,
   'valid_percent': 87.95561863327674}}]
observation_date_1
# Make a GET request to retrieve information for your first date/time
tile_pre = requests.get(
    f"{RASTER_API_URL}/collections/{collection_id}/items/{item_id}/WebMercatorQuad/tilejson.json?"
    f"&assets={asset_name}"
).json()

# Print the properties of the retrieved granule to the console
tile_pre
{'tilejson': '2.2.0',
 'version': '1.0.0',
 'scheme': 'xyz',
 'tiles': ['https://dev.openveda.cloud/api/raster/collections/sentinel-2-all-vars-daily/items/sentinel-2-2025-06-09/tiles/WebMercatorQuad/{z}/{x}/{y}@1x?assets=colorIR'],
 'minzoom': 0,
 'maxzoom': 24,
 'bounds': [-108.87342891912581,
  32.37731504449202,
  -106.97869343172056,
  33.46711546874811],
 'center': [-107.92606117542319, 32.92221525662006, 0]}
# Repeat the above for your second date/time
observation_date_2 = items_dict[dates[1]]
# Extract collection name and item ID
collection_id = observation_date_2.collection_id
item_id = observation_date_2.id

# Make a GET request to retrieve information for your second date/time
tile_post = requests.get(
    f"{RASTER_API_URL}/collections/{collection_id}/items/{item_id}/WebMercatorQuad/tilejson.json?"
    f"&assets={asset_name}"
).json()

# Print the properties of the retrieved granule to the console
tile_post
{'tilejson': '2.2.0',
 'version': '1.0.0',
 'scheme': 'xyz',
 'tiles': ['https://dev.openveda.cloud/api/raster/collections/sentinel-2-all-vars-daily/items/sentinel-2-2025-06-29/tiles/WebMercatorQuad/{z}/{x}/{y}@1x?assets=colorIR'],
 'minzoom': 0,
 'maxzoom': 24,
 'bounds': [-108.87342828381848,
  32.37738839044249,
  -106.97877645825533,
  33.46711546874811],
 'center': [-107.9261023710369, 32.922251929595305, 0]}

We will then use the tile URL prepared above to create a simple visualization for both time steps using folium. In the visualization you can zoom in and out of the map’s focus area and compare the burn scar to the pre-fire image side-by-side.

Generate Map for Sentinel-2

We will use the folium package to generate visualizations. folium allows the user to zoom in to see the high-resolution detail of the imagery. The following code block plots both layers on a dual map and adds a title.

# Set initial zoom and map for Trout Fire
m = folium.plugins.DualMap(location=(32.97, -108.15), zoom_start=11)

# June 9 2025
map_layer_pre = TileLayer(
    tiles=tile_pre["tiles"][0],
    attr="VEDA",
    opacity=0.8,
)
map_layer_pre.add_to(m.m1)

# June 29 2025
map_layer_post = TileLayer(
    tiles=tile_post["tiles"][0],
    attr="VEDA",
    opacity=0.8,
)
map_layer_post.add_to(m.m2)

# Properly styled title overlay for DualMap
title_html = f'''
<div style="
position: fixed; 
top: 75px; left: 0; width: 100%;
text-align: center;
font-size: 20px;
font-weight: bold;
background-color: rgba(255, 255, 255, 0.7);
padding: 5px;
z-index: 9999;
">
Sentinel-2 Imagery Pre Fire ({dates[0]}) and Post Fire ({dates[1]})
</div>
'''

m.get_root().html.add_child(folium.Element(title_html))
m
(Interactive folium dual map showing pre-fire and post-fire Sentinel-2 imagery.)

Following the same process, we will visualize imagery from two more STAC collections. First, we will explore dNBR.

About the Data: Normalized Burn Ratio Difference (dNBR)

NBR is defined mathematically as (NIR – SWIR)/(NIR + SWIR), where NIR is near-infrared reflectance and SWIR is short-wave infrared reflectance. dNBR is computed as the difference between the pre-fire NBR and the post-fire NBR. NBR is commonly used as a proxy for charred vegetation: darker (more negative) values in an NBR image indicate a stronger presence of burned vegetation. Because dNBR accounts for the condition of the scene before the fire occurred, it is used as a proxy for burn severity, with higher dNBR values indicating greater burn severity. Negative dNBR values may indicate re-greening or vegetation growth between the pre- and post-fire imagery.
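As a minimal worked example of the formulas above (using small hypothetical reflectance arrays; the notebook itself relies on the pre-computed OPERA dNBR product):

import numpy as np

def nbr(nir, swir):
    # Normalized Burn Ratio: (NIR - SWIR) / (NIR + SWIR)
    return (nir - swir) / (nir + swir + 1e-12)

# Hypothetical pre- and post-fire reflectance values for two pixels
nir_pre, swir_pre = np.array([0.45, 0.40]), np.array([0.20, 0.22])
nir_post, swir_post = np.array([0.20, 0.38]), np.array([0.35, 0.24])

# dNBR = pre-fire NBR minus post-fire NBR; higher values suggest more severe burning
dnbr = nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)
print(dnbr)  # the first pixel shows a large positive dNBR (burned), the second is nearly unchanged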

More information on dNBR can be found here: https://un-spider.org/advisory-support/recommended-practices/recommended-practice-burn-severity/in-detail/normalized-burn-ratio.

dNBR data may be computed while the fire is still in progress. This is done intentionally to prioritize rapid data availability for disaster response, but it means the data can change over the course of the fire.

dNBR is produced by NASA’s Observational Products for End-Users from Remote Sensing Analysis (OPERA) program, which generates surface products derived from satellite data. Therefore, dNBR data will be found in an opera collection.

Query the STAC API for dNBR

# Fetch STAC collection
collection_name_opera_subdaily = "opera-all-vars-subdaily"
catalog = Client.open(STAC_API_URL)
collection = catalog.get_collection(collection_name_opera_subdaily)
# Print the properties of the collection to the console
collection
# The search function lets you search for items within a specific date/time range
search = catalog.search(
    collections=collection_name_opera_subdaily,
    datetime=['2025-06-09T00:00:00Z','2025-06-29T00:00:00Z']
)
items = search.item_collection()
# Print how many items we found in our search
print(f"# items found: {len(items)}")
# items found: 2
# Examine the first item in the collection
# Keep in mind that a list starts from 0, 1, 2... therefore items[0] is referring to the first item in the list/collection
items = search.item_collection()
items[0]
# Restructure our items into a dictionary where the keys are the item dates
# Then we can query more easily by date, e.g. "2025-06-21"
items_dict = {item.properties["datetime"][:10]: item for item in items}
asset_name = "dnbr"

Fetch Imagery from Raster API for dNBR

We will choose one of the two available dates to visualize.

# Specify date that you would like to visualize, using the format of items_dict.keys()
date = "2025-06-21"

This time, we will also use the raster band statistics to set rescale values for our colormap. dNBR values range from -1 to 1, with more positive values indicating more severe burns.

# Extract collection name and item ID for the first date
observation_date = items_dict[date]
collection_id = observation_date.collection_id
item_id = observation_date.id
# Select the relevant asset (dNBR)
asset = observation_date.assets[asset_name]
raster_bands = asset.extra_fields.get("raster:bands", [{}])
# Print the raster bands' information
raster_bands
[{'scale': 1.0,
  'nodata': -9999.0,
  'offset': 0.0,
  'sampling': 'area',
  'data_type': 'float64',
  'histogram': {'max': 0.9816958355058067,
   'min': -0.1365383543458513,
   'count': 11,
   'buckets': [8923, 258816, 109761, 30963, 7436, 1582, 328, 53, 50, 19]},
  'statistics': {'mean': 0.08097275888625194,
   'stddev': 0.0855684493119729,
   'maximum': 0.9816958355058067,
   'minimum': -0.1365383543458513,
   'valid_percent': 99.24179101642272}}]
#Generate an appropriate color bar range.
rescale_values = {
    "max": raster_bands[0]['statistics']['maximum'],
    "min": raster_bands[0]['statistics']['minimum'],
}

print(rescale_values)
{'max': 0.9816958355058067, 'min': -0.1365383543458513}
# Choose a colormap for displaying the data
# The name should match one of Matplotlib's standard colormap names
# For more information on colormaps in Matplotlib, please visit https://matplotlib.org/stable/users/explain/colors/colormaps.html
color_map = "inferno"
# Make a GET request to retrieve information for your first date/time
observation_tile = requests.get(
    f"{RASTER_API_URL}/collections/{collection_id}/items/{item_id}/WebMercatorQuad/tilejson.json?"
    f"&assets={asset_name}"
    f"&color_formula=gamma+r+1.05&colormap_name={color_map.lower()}"
    f"&rescale={rescale_values['min']},{rescale_values['max']}"
).json()

# Print the properties of the retrieved granule to the console
observation_tile
{'tilejson': '2.2.0',
 'version': '1.0.0',
 'scheme': 'xyz',
 'tiles': ['https://dev.openveda.cloud/api/raster/collections/opera-all-vars-subdaily/items/opera-2025-06-21T18:05:00/tiles/WebMercatorQuad/{z}/{x}/{y}@1x?assets=dnbr&color_formula=gamma+r+1.05&colormap_name=inferno&rescale=-0.1365383543458513%2C0.9816958355058067'],
 'minzoom': 0,
 'maxzoom': 24,
 'bounds': [-108.23813471556193,
  32.87367428279499,
  -108.01506342300563,
  33.04157419951851],
 'center': [-108.12659906928377, 32.95762424115675, 0]}

Generate Map for dNBR

We will use the folium package once again, but this time we also add code to generate a colorbar.

# --- Create the map ---
m = Map(
    tiles="OpenStreetMap",
    location=[
        32.97,
        -108.15,
    ],
    zoom_start=12,
)

map_layer = TileLayer(
    tiles=observation_tile["tiles"][0],
    attr="VEDA",
    opacity=0.6,
)

map_layer.add_to(m)

# --- Add title ---
title_html = f'''
<div style="
position: fixed; 
top: 75px; left: 0; width: 100%;
text-align: center;
font-size: 20px;
font-weight: bold;
background-color: rgba(255, 255, 255, 0.7);
padding: 5px;
z-index: 9999;
">
Burn Severity Map (dNBR) on {date}
</div>
'''
m.get_root().html.add_child(folium.Element(title_html))

# Get the matplotlib colormap (same as the API color_map)
mpl_colormap = mpl.colormaps[color_map.lower()]

# Create a Branca LinearColormap using the same range
colormap = cm.LinearColormap(
    colors=[mpl_colormap(i) for i in range(mpl_colormap.N)],
    vmin=rescale_values['min'],
    vmax=rescale_values['max']
)
colormap.caption = "dNBR"

# --- Use to_step() to get stable HTML ---
colormap_step = colormap.to_step(n=50)
colorbar_html = colormap_step._repr_html_()

# --- Wrap and fix position (bottom-left) ---
fixed_colorbar = f'''
<div style="
position: fixed;
bottom: 30px;
left: 30px;
width: 220px;
z-index: 9999;
">
{colorbar_html}
</div>
'''
m.get_root().html.add_child(folium.Element(fixed_colorbar))

m
(Interactive folium map showing the dNBR burn severity layer with a colorbar.)

About the Data: VEG-ANOM-MAX

Finally, we will explore changes in vegetation cover.

VEG-ANOM-MAX is derived from the OPERA Disturbance Alert product, which is based on Harmonized Landsat Sentinel-2 data. It measures the difference between the historical and current-year observed vegetation cover at the date of maximum decrease (vegetation loss of 0-100%). This layer can be used to threshold vegetation disturbance at a given sensitivity (e.g. disturbance of >=20% vegetation cover loss). The sum of the historical percent vegetation and the anomaly value gives the vegetation cover estimate for the current year.
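As a minimal sketch of how such a layer might be thresholded at a chosen sensitivity (using a hypothetical array of VEG-ANOM-MAX values; 255 is treated as no data, as discussed later in this notebook):

import numpy as np

# Hypothetical VEG-ANOM-MAX values: percent vegetation cover loss (0-100), 255 = no data
veg_anom_max = np.array([0, 5, 18, 42, 87, 255])

# Mask out no-data pixels, then flag pixels at or above a 20% vegetation-loss threshold
valid = veg_anom_max <= 100
disturbed = valid & (veg_anom_max >= 20)
print(disturbed)  # [False False False  True  True False]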

The process for visualization will exactly follow prior examples.

Query the STAC API for Max Vegetation Anomaly

# Fetch the collection from the STAC API
collection_name_opera_daily = "opera-all-vars-daily"
catalog = Client.open(STAC_API_URL)
collection = catalog.get_collection(collection_name_opera_daily)
# Print the properties of the collection to the console
collection
# The search function lets you search for items within a specific date/time range
search = catalog.search(
    collections=collection_name_opera_daily,
    datetime=['2025-06-09T00:00:00Z','2025-06-29T00:00:00Z']
)
items = search.item_collection()
# Print how many items we found in our search
print(f"# items found: {len(items)}")
# items found: 4
# Examine the first item in the collection
# Keep in mind that a list starts from 0, 1, 2... therefore items[0] is referring to the first item in the list/collection
items = search.item_collection()
items[0]
# Restructure our items into a dictionary where the keys are the item dates
# Then we can query more easily by date, e.g. "2025-06-24"
items_dict = {item.properties["datetime"][:10]: item for item in items}

We will use the VEG-ANOM-MAX asset, storing its name in a variable.

asset_name = "VEG-ANOM-MAX"

Fetch Imagery from Raster API for Max Vegetation Anomaly

date = "2025-06-24"
# Extract the collection name and item ID for the chosen date
observation_date = items_dict[date]
collection_id = observation_date.collection_id
item_id = observation_date.id
# Select the relevant asset (VEG-ANOM-MAX)
asset = observation_date.assets[asset_name]
raster_bands = asset.extra_fields.get("raster:bands", [{}])
# Print the raster bands' information
raster_bands
[{'scale': 1.0,
  'nodata': 0.0,
  'offset': 0.0,
  'sampling': 'area',
  'data_type': 'uint8',
  'histogram': {'max': 255.0,
   'min': 10.0,
   'count': 11,
   'buckets': [3681, 653, 63, 3, 0, 0, 0, 0, 0, 55864]},
  'statistics': {'mean': 238.1747477764503,
   'stddev': 60.02815956239316,
   'maximum': 255,
   'minimum': 10,
   'valid_percent': 5.792476624015748}}]
#Generate an appropriate color bar range.
rescale_values = {
    "max": raster_bands[0]['statistics']['maximum'],
    "min": raster_bands[0]['statistics']['minimum'],
}

print(rescale_values)
{'max': 255, 'min': 10}
# Choose a colormap for displaying the data
# The name should match one of Matplotlib's standard colormap names
# For more information on colormaps in Matplotlib, please visit https://matplotlib.org/stable/users/explain/colors/colormaps.html
color_map = "magma"

For this dataset, we hard-code the maximum rescale value to 100, because values above 100 (i.e. 255) are no-data values.

# Make a GET request to retrieve information for your first date/time
observation_tile = requests.get(
    f"{RASTER_API_URL}/collections/{collection_id}/items/{item_id}/WebMercatorQuad/tilejson.json?"
    f"&assets={asset_name}"
    f"&color_formula=gamma+r+1.05&colormap_name={color_map.lower()}"
    f"&rescale={rescale_values['min']},100"
).json()

# Print the properties of the retrieved granule to the console
observation_tile
{'tilejson': '2.2.0',
 'version': '1.0.0',
 'scheme': 'xyz',
 'tiles': ['https://dev.openveda.cloud/api/raster/collections/opera-all-vars-daily/items/opera-2025-06-24/tiles/WebMercatorQuad/{z}/{x}/{y}@1x?assets=VEG-ANOM-MAX&color_formula=gamma+r+1.05&colormap_name=magma&rescale=10%2C100'],
 'minzoom': 0,
 'maxzoom': 24,
 'bounds': [-108.97616786143968,
  31.475062809632632,
  -107.00321128236091,
  33.46381942656848],
 'center': [-107.9896895719003, 32.469441118100555, 0]}
As an alternative to building the query string by hand, the same type of request can be made by passing the query parameters as a dictionary; here we also rescale from 0 to 100 and use the ylorrd colormap.

params = {
    "assets": "VEG-ANOM-MAX",
    "rescale": "0,100",
    "colormap_name": "ylorrd"
}
tile = requests.get(
    f"{RASTER_API_URL}/collections/{collection_name_opera_daily}/items/{f'opera-{date}'}/WebMercatorQuad/tilejson.json?",
    params=params,
).json()
tile
{'tilejson': '2.2.0',
 'version': '1.0.0',
 'scheme': 'xyz',
 'tiles': ['https://dev.openveda.cloud/api/raster/collections/opera-all-vars-daily/items/opera-2025-06-24/tiles/WebMercatorQuad/{z}/{x}/{y}@1x?assets=VEG-ANOM-MAX&rescale=0%2C100&colormap_name=ylorrd'],
 'minzoom': 0,
 'maxzoom': 24,
 'bounds': [-108.97616786143968,
  31.475062809632632,
  -107.00321128236091,
  33.46381942656848],
 'center': [-107.9896895719003, 32.469441118100555, 0]}

Generate Map for Max Vegetation Anomaly

We will use the folium package once again with the same format as the dNBR visualization.

m = Map(
    tiles="OpenStreetMap",
    location=[
        32.97,
        -108.15,
    ],
    zoom_start=12,
)

map_layer = TileLayer(
    tiles=observation_tile["tiles"][0],
    attr="VEDA",
    opacity=0.6,
)

map_layer.add_to(m)

# --- Add title ---
title_html = f'''
<div style="
position: fixed; 
top: 75px; left: 0; width: 100%;
text-align: center;
font-size: 20px;
font-weight: bold;
background-color: rgba(255, 255, 255, 0.7);
padding: 5px;
z-index: 9999;
">
Maximum Loss of Vegetation {date}
</div>
'''
m.get_root().html.add_child(folium.Element(title_html))

# Get the matplotlib colormap (same as the API color_map)
mpl_colormap = mpl.colormaps[color_map.lower()]

# Create a Branca LinearColormap using the same range
colormap = cm.LinearColormap(
    colors=[mpl_colormap(i) for i in range(mpl_colormap.N)],
    vmin=rescale_values['min'],
    vmax=100
)
colormap.caption = "Vegetation Loss (%)"

# --- Use to_step() to get stable HTML ---
colormap_step = colormap.to_step(n=50)
colorbar_html = colormap_step._repr_html_()

# --- Wrap and fix position (bottom-left) ---
fixed_colorbar = f'''
<div style="
position: fixed;
bottom: 30px;
left: 30px;
width: 220px;
z-index: 9999;
">
{colorbar_html}
</div>
'''
m.get_root().html.add_child(folium.Element(fixed_colorbar))

m
(Interactive folium map showing the maximum vegetation loss layer with a colorbar.)

Some of the areas with the most significant vegetation loss line up with the areas of peak dNBR. This is one of the utilities of these data: you can draw connections between different datasets.

Summary

In this case study we visualized how NASA monitors wildfires with several satellite products, using the June 2025 Trout Fire in New Mexico as an example. We demonstrated how to query the STAC collections and the Raster API to gather satellite imagery of a disaster. Using three satellite products, we could see areas of significant ongoing burning (dNBR), areas that lost significant vegetation (VEG-ANOM-MAX), and the appearance of the burn scar (Sentinel-2 color IR and true color). Together, these products let us analyze how the fire evolved, how it impacted the area, and how features such as the burned areas relate spatially to the vegetation loss.
