Support

Get started, find answers, and troubleshoot issues.

General Questions

What does LGND do?

LGND generates, hosts, and queries embeddings from large Earth observation models to support a wide range of analytics.

What problem is LGND solving?

Whereas previous image classification architectures required hundreds of labeled examples and custom models, LGND, built on large Earth observation models, achieves the same outcomes faster and at a fraction of the cost.


Historically, image classification models required hundreds or thousands of annotated examples to become expert “recognizers.” Large Earth observation models are smarter, faster learners that can accomplish the same tasks with just a handful of examples.

What analytics does LGND support?

LGND can identify bounding boxes for features of interest anywhere in the world. Embeddings can classify images as well as model continuous variables such as aboveground biomass.
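To illustrate both use cases downstream, here is a minimal sketch that fits a classifier and a regressor on precomputed embedding vectors with scikit-learn. The arrays and names (tile_embeddings, labels, biomass_t_ha) are random stand-ins for the sketch, not LGND outputs.

```python
# Stand-in sketch of downstream analytics on precomputed tile embeddings;
# the arrays here are random placeholders, not LGND outputs.
import numpy as np
from sklearn.linear_model import LogisticRegression, Ridge

rng = np.random.default_rng(0)
tile_embeddings = rng.normal(size=(200, 768))  # one embedding per raster tile
labels = rng.integers(0, 2, size=200)          # e.g., feature present / absent
biomass_t_ha = rng.uniform(0, 300, size=200)   # e.g., aboveground biomass (t/ha)

# Classification: does a tile contain the feature of interest?
clf = LogisticRegression(max_iter=1000).fit(tile_embeddings, labels)

# Regression: model a continuous variable from the same embeddings.
reg = Ridge().fit(tile_embeddings, biomass_t_ha)

new_tile = rng.normal(size=(1, 768))
print(clf.predict_proba(new_tile), reg.predict(new_tile))
```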

How do I use LGND?

LGND offers an API as well as two SaaS products: LGND Discover and LGND Studio.


The API is best for technical users who need to generate and host embeddings and integrate them into existing solutions.
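As a rough sense of what an integration can look like, the sketch below requests embeddings for an area of interest over HTTP. The endpoint URL, request fields, and response shape are placeholder assumptions, not LGND's documented API; consult the API documentation for the real interface.

```python
# Illustrative only: the endpoint URL, request fields, and response shape are
# placeholder assumptions for this sketch, not LGND's documented API.
import requests

API_URL = "https://api.lgnd.example/v1/embeddings"  # placeholder URL

resp = requests.post(
    API_URL,
    headers={"Authorization": "Bearer <YOUR_API_KEY>"},
    json={
        "source": "sentinel-2",                # imagery collection
        "bbox": [-97.5, 30.1, -97.0, 30.5],    # area of interest (lon/lat)
        "datetime": "2024-06-01/2024-08-31",   # acquisition window
    },
    timeout=60,
)
resp.raise_for_status()
embeddings = resp.json()  # e.g., one embedding vector per raster tile
```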


Our Discover app is built to serve as a research partner to help you explore complex questions about the Earth. This app bridges natural language and Earth embeddings, so you can track locations, trends, and much more. No coding required.


Our Studio app is an end-to-end Earth observation analytics service with an interactive user interface for finding, measuring, and monitoring features and building validated datasets. No coding required.

Discover

What is LGND Discover?

LGND Discover lets you search Earth imagery using natural language. Instead of browsing maps manually, you describe what you’re looking for and Discover scans large-scale satellite imagery to surface relevant locations in seconds. It’s powered by LGND’s geofoundational intelligence, which understands both geographic context and visual patterns in imagery.

How do I use the Discover app?

You type a plain-language query into the search bar describing what you’re looking for (a feature or change). You can include where you want to look (a place or region) and/or when you’re interested in (for example, recent changes). Some examples include “Find new solar arrays in Texas”, “Find parks with playgrounds in Denver”, or “Find homes with tennis courts in Los Angeles”.

Discover automatically retrieves the geographic boundary (for example, the polygon for “Texas”) and searches imagery that intersects both your location and your feature of interest.
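Conceptually, the spatial step works like the sketch below: keep only tiles whose footprints intersect the query region. The Texas bounding box here is a crude stand-in for the real boundary polygon, and the snippet is illustrative rather than Discover's implementation.

```python
# Illustrative spatial filter, not Discover's implementation: keep tiles whose
# footprints intersect the query region. The box below is a crude stand-in
# for the real Texas boundary polygon.
from shapely.geometry import box

texas_approx = box(-106.6, 25.8, -93.5, 36.5)  # rough bounding box for Texas

tile_footprints = [
    box(-97.50, 30.10, -97.47, 30.13),  # near Austin: intersects
    box(-80.00, 40.00, -79.97, 40.03),  # near Pittsburgh: does not
]
matches = [t for t in tile_footprints if t.intersects(texas_approx)]
print(len(matches))  # 1
```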

What results do I see from a query in Discover?

Discover previews four matches in the chat and displays up to 100 results on the map and in the grid. Each result is color-coded by confidence, which reflects how closely the imagery matches your query based on embedding similarity.
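Embedding similarity is commonly computed as cosine similarity between a query embedding and each tile embedding; whether Discover uses exactly this metric is an assumption, but the sketch below shows the general idea with stand-in vectors.

```python
# Stand-in sketch of confidence scoring: cosine similarity between the query
# embedding and each tile embedding (whether Discover uses exactly this
# metric is an assumption).
import numpy as np

def cosine_similarity(query: np.ndarray, tiles: np.ndarray) -> np.ndarray:
    q = query / np.linalg.norm(query)
    t = tiles / np.linalg.norm(tiles, axis=1, keepdims=True)
    return t @ q  # one score per tile, higher = closer match

rng = np.random.default_rng(0)
query_embedding = rng.normal(size=768)         # stand-in query embedding
tile_embeddings = rng.normal(size=(500, 768))  # stand-in tile embeddings

scores = cosine_similarity(query_embedding, tile_embeddings)
top_100 = np.argsort(scores)[::-1][:100]       # indices of the best matches
```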

Can I inspect individual results?

Yes. You can switch to a grid view to see image thumbnails for all results. Selecting a result shows the image date, the data collection source, and the similarity score.

You can also locate each result on the map and use a date slider to view how that location changed across acquisition years.

How do I refine my results?

Discover supports three refinement methods:


1. Direct feedback

Use thumbs up/down on results. Re-running the search incorporates your feedback to improve relevance (see the sketch after this list).


2. Follow-up prompts

Add constraints like “only in desert regions” or select suggested prompts to narrow intent.


3. Manual annotation

Draw examples directly on the map using the annotation tool. You can even label before/after imagery to specify a change pattern.


These signals are used immediately to improve subsequent searches.
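LGND hasn't published how these signals are combined, but a common baseline for folding thumbs-up/down feedback into vector search is Rocchio-style relevance feedback: shift the query embedding toward liked results and away from disliked ones. The sketch below shows that generic technique, not Discover's actual method.

```python
# Generic Rocchio-style relevance feedback, not LGND's published method:
# nudge the query embedding toward thumbs-up results, away from thumbs-down.
import numpy as np

def refine_query(query, positives, negatives, alpha=1.0, beta=0.75, gamma=0.25):
    q = alpha * query
    if len(positives):
        q = q + beta * np.mean(positives, axis=0)
    if len(negatives):
        q = q - gamma * np.mean(negatives, axis=0)
    return q / np.linalg.norm(q)

rng = np.random.default_rng(0)
query = rng.normal(size=768)           # embedding of the original query
liked = rng.normal(size=(3, 768))      # embeddings of thumbs-up results
disliked = rng.normal(size=(2, 768))   # embeddings of thumbs-down results
refined = refine_query(query, liked, disliked)  # use for the next search
```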

How does Discover handle change over time?

If your query implies change (such as “new” solar arrays), Discover analyzes imagery across multiple timestamps and looks for transitions, like a feature going from absent to present. The reasoning behind these results can be inspected directly in the app through model reasoning traces.
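A simple way to picture the absent-to-present transition is to score one location's imagery against the feature of interest at each timestamp and look for a jump. The thresholding logic below is an illustrative simplification, not the model's actual reasoning.

```python
# Illustrative simplification of "absent -> present" change detection:
# score one location's imagery against the feature at each timestamp and
# flag a low-to-high transition. Scores here are stand-in values.
import numpy as np

def appeared(scores_by_year: np.ndarray, threshold: float = 0.5) -> bool:
    """True if the feature was absent early on and present in the latest image."""
    return scores_by_year[0] < threshold and scores_by_year[-1] >= threshold

similarity_by_year = np.array([0.12, 0.15, 0.71, 0.83])  # e.g., 2021-2024
print(appeared(similarity_by_year))  # True: consistent with a "new" feature
```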

Technical Support

Which large Earth observation model does LGND use?

LGND currently hosts the Clay foundation model. Additional open-source models will be available in the near future. 

How large of an area can be analyzed?

LGND can be run on an area of any size. The unit of analysis is a raster tile: a single remotely sensed image (satellite, aerial, or drone) of a specific location on Earth, acquired at a specific time.
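As a mental model, a raster tile pairs a footprint and acquisition time with its embedding. The record below is a hypothetical shape for that unit of analysis; the field names are illustrative, not LGND's schema.

```python
# Hypothetical shape for the unit of analysis; field names are illustrative,
# not LGND's schema.
from dataclasses import dataclass
from datetime import date

@dataclass
class RasterTile:
    source: str                              # "sentinel-2", "landsat", "naip", ...
    bbox: tuple[float, float, float, float]  # lon/lat bounds of the tile footprint
    acquired: date                           # acquisition date of the image
    embedding: list[float]                   # embedding vector for this tile

tile = RasterTile(
    source="sentinel-2",
    bbox=(-97.50, 30.10, -97.47, 30.13),
    acquired=date(2024, 7, 4),
    embedding=[0.0] * 768,  # placeholder vector
)
```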

What imagery sources are available?

LGND provides easy access to Sentinel-2, Landsat, and NAIP imagery.

How large are raster tiles?

Raster tiles (chips) are typically 256x256 pixels. A Sentinel-2 chip, where each pixel is 10 meters, therefore spans 2.56 km on a side, an area of about 6.55 km^2.
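For reference, the footprint arithmetic works out as follows:

```python
# Footprint arithmetic for a 256x256 Sentinel-2 chip at 10 m per pixel.
pixels_per_side = 256
meters_per_pixel = 10

side_km = pixels_per_side * meters_per_pixel / 1000  # 2.56 km per side
area_km2 = side_km ** 2                              # 6.5536 km^2
print(side_km, area_km2)
```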

How accurate is LGND?

LGND unlocks significant accuracy with just a few reference examples. Accuracy depends on many factors: how much training data is provided, how distinct an object is relative to its surroundings, and how variable the object is over space and time. It is rare for a model to work perfectly out of the box. As with other AI tools, LGND’s analytics are refined through user prompting and feedback.

How long does it take to train and run a classification model on LGND?

Both training and inference typically take a few minutes to a few hours: training time depends on the number of labels used, and inference time depends on the size of the area of interest.

What bands were used for pretraining?

The Clay model was trained on 10 bands from Sentinel-2 imagery, 10 bands from Landsat imagery, and all four bands of NAIP.

Which bands can be used for inference?

Band wavelengths are encoded in the model, so it can extrapolate to wavelengths that are within or near the ranges used for pretraining.
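One way to reason about this in practice is to check whether a new sensor's band centers fall near the pretraining spectral span. The range below is a rough assumption for the optical bands of Sentinel-2, Landsat, and NAIP, and the check is illustrative, not the model's logic.

```python
# Illustrative check only: the spectral span below is a rough assumption for
# the optical bands used in pretraining, not a value published by LGND.
PRETRAIN_SPAN_UM = (0.44, 2.30)  # assumed approximate optical range, micrometers

def near_pretrained_range(center_um: float, margin_um: float = 0.05) -> bool:
    lo, hi = PRETRAIN_SPAN_UM
    return lo - margin_um <= center_um <= hi + margin_um

print(near_pretrained_range(0.49))   # a blue band: within range
print(near_pretrained_range(10.90))  # a thermal band: far outside the range
```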

How frequently can I run a model?

You can run a model as many times as you’d like. If you're studying a phenomenon that changes frequently, you can run your model on each update of imagery. Sentinel-2 offers new imagery roughly every five days; Landsat roughly every eight days (with Landsat 8 and 9 combined). NAIP imagery updates every other year.

How frequently can I update my model’s results?

Models can be run each time new (cloud-free!) imagery becomes available.