Vectors and Rasters

(N.b. this is a refresher from the BAS QGIS Tutorial)

Vectors

Vector data provide a way to represent real world features within the GIS environment. A feature is anything you can see on the landscape. Imagine you are standing on the top of a mountain. Looking down you can see rock outcrop, crevasses, lakes, tents and so on (imagining you are in Antarctica!). Each one of these things would be a feature when we represent them in a GIS Application. This kind of data is often known as discrete data (as opposed to continuous, which we will cover later). Vector features have attributes, which consist of text or numerical information that describe the features.

A vector feature has its shape represented using geometry. The geometry is made up of one or more interconnected vertices. A vertex describes a position in space using X and Y coordinates and, optionally, a Z value.

Vectors are most commonly thought of as points, (poly)lines and polygons.

Example of points, lines and polygons

Where there is only a single vertex, it is referred to as a point. Examples are station locations, spot heights and ice core locations.

Where there are two or more vertices that do not join up at the start and end, a polyline is created. Examples are contours, ship tracks and rivers.

Where three or more vertices are present, and the first and last vertex join up, an enclosed polygon is created. Examples include land cover, sea ice extent and rock outcrop.
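
To make this concrete, here is a minimal sketch in Python using the shapely library (the coordinates are invented for illustration):

```python
from shapely.geometry import Point, LineString, Polygon

# One vertex: a point (e.g. a station location)
station = Point(-68.13, -67.57)

# Two or more vertices that do not close up: a polyline (e.g. a ship track)
track = LineString([(-68.1, -67.5), (-68.0, -67.4), (-67.9, -67.3)])

# Three or more vertices with the first and last joined up: a polygon
# (shapely closes the ring automatically if the last vertex is omitted)
outcrop = Polygon([(-68.1, -67.5), (-68.0, -67.5), (-68.0, -67.4)])

print(station.geom_type, track.geom_type, outcrop.geom_type)
# Point LineString Polygon
```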

Diagram explaining the structure of points, lines and polygons

Examples of vector data formats

  • Shapefiles

    • the most common form of vector data within GIS

    • developed and regulated by Esri - proprietary

    • although the term is 'a shapefile', the format actually consists of several files (between three and ten or so), as seen below. You need all of the files for the shapefile to work properly.

  • GeoPackages

    • a newer and less widespread format for vector data, but with many advantages over shapefiles

    • as seen in the screenshot below, they consist of only one .gpkg file

    • it is defined by the Open Geospatial Consortium (OGC) - therefore open and non-proprietary

  • Geodatabases, GeoJSON, GPX, KML, CSV, and many more...

An example of how a GeoPackage and a shapefile appear in Windows Explorer
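
As a small illustration of the practical difference, the geopandas library can read a shapefile (all of its sibling files must be present) and convert it to a single-file GeoPackage. The file names here are hypothetical:

```python
import geopandas as gpd

# Reading a shapefile: you pass the .shp path, but the sibling files
# (.shx, .dbf, .prj, ...) must sit alongside it for the read to work
gdf = gpd.read_file("rock_outcrop.shp")

# Writing the same data out as a GeoPackage produces just one .gpkg file
gdf.to_file("rock_outcrop.gpkg", driver="GPKG")
```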

Rasters

Rasters represent data in a different way to vectors. They are a grid of pixels (sometimes called cells), and each pixel contains a value that represents the conditions for the area covered by that cell. Some examples of typical raster data include satellite imagery, sea ice concentration and digital elevation models (DEMs).

A raster, with its matrix of pixels. Each pixel represents a geographical region and each pixel has a value.

Raster data is used in a GIS application when we want to display information that is continuous across an area and cannot easily be divided into vector features. The examples of vector data that we discussed before (station locations, contours, coastline) would be very difficult to represent as a raster. They could be saved as a raster, but it would waste a great deal of space, because most of the pixels would contain null data.

  • It is occasionally sensible to convert data from vector to raster, or from raster to vector, but not very often

Rasters should always be georeferenced (they should know their place on the Earth's surface). This usually consists of a coordinate for the top-left pixel in the image, the size of each pixel in the X direction and the size of each pixel in the Y direction. From this information, the file knows the location of each of its pixels; the code sketch after the list below shows this in practice.

  • Formats for georeferenced rasters can vary. GeoTIFF is a very common format for GIS rasters that has many advantages, but there are many others as well.

  • You can sometimes work with non-georeferenced images, for example, old aerial photographs or paper maps that have been scanned. You can add these to your GIS and then manually 'georeference' them by locating common points in the image and in a known dataset.
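
Here is a minimal sketch of reading that georeferencing with the rasterio library (the file name is hypothetical): the transform holds the top-left coordinate and the pixel sizes, and lets you convert pixel indices to map coordinates.

```python
import rasterio

with rasterio.open("dem.tif") as src:
    print(src.transform)  # top-left coordinate plus pixel size in X and Y
    print(src.crs)        # the coordinate reference system

    # Map coordinates of the centre of the pixel at row 100, column 250,
    # i.e. x = x_origin + col * pixel_width, y = y_origin + row * pixel_height
    x, y = src.xy(100, 250)
    print(x, y)
```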

How is satellite imagery made?

Remote sensing is the process of detecting and monitoring the physical characteristics of an object by measuring it at a distance. The measurements are usually of reflected and emitted radiation. For optical imagery, this means detecting colour and brightness.

The simplest example of remote sensing is taking photos of the Earth's surface from an aircraft or a satellite (with a huge camera).

DigitalGlobe's WorldView-4 at Lockheed Martin Space Systems' Sunnyvale, California, facility. Credit: Lockheed Martin

Perhaps the most familiar use of satellite imagery is the satellite view in Google Maps.

Satellite view of BAS, Cambridge site. Credit: Google Maps

What types of imagery do we usually use?

The most common classification divides sensors into passive sensors, which capture signals emitted or reflected by an object (the way the human eye or a phone camera works), and active sensors, which register the backscattering of their own emitted signal (the way bats or dolphins do). Among the first group, optical sensors are the most popular; among the second you may encounter radar, including Synthetic Aperture Radar (SAR), LiDAR (laser), laser altimetry, and other technologies.

When choosing data for your project, besides the sensor type, you should consider three resolutions: spectral, spatial, and temporal.

In terms of spectral resolution, optical sensors are usually divided into multispectral sensors, which may have tens of broad spectral bands (from tens to hundreds of nanometres wide), and hyperspectral sensors, which can have hundreds of very narrow spectral bands (2-10 nanometres). Radar satellites also operate on different bands (frequencies); a given SAR sensor uses only one frequency, but with various combinations of polarisations.

Another way to classify sensors is by the spatial resolution of their images (regardless of whether the image was formed by an active or a passive sensor). The boundaries of the following classes are not fixed and tend to shift over time as new sensors are developed:

  • Kilometres – low resolution

  • Tens of metres – medium resolution

  • Up to 10 m – high resolution

  • Less than 1 m – very high resolution (VHR)

While high-resolution and VHR cameras allow the detection of very small objects on the Earth's surface and the creation of topographic maps, low-resolution sensors survey vast territories daily (or even every 10 minutes, as geostationary meteorological satellites do).

View from the geostationary meteorological satellite Himawari. Credit: NICT, Japan

This leads us to temporal resolution – the frequency of image updates, or revisit time. This frequency depends on orbit geometry, sensor field of view, and local conditions (for example, no optical images in the visible part of the spectrum are taken during the polar night).

Optical Remote Sensing and Passive Sensors

Optical sensors are designed to capture radiation in pre-defined spectral intervals, called bands or channels. The image below shows the distribution of spectral bands across the optical part of the spectrum for Sentinel-2 and Landsat 7 & 8. You may notice that the spectral bands are designed to ensure 1) differentiation between various objects and/or the study of their properties, and 2) that they fall where the Earth's atmosphere is transparent to outgoing radiation. The human eye sees the interval between 380 and 780 nm (from violet to red). Satellites usually have at least 3 bands in this interval: Blue, Green, and Red - to create a "True-colour" view.

Spectral bands of Sentinel-2 and Landsat 7 & 8. Spectral signatures of vegetation, soil, and water are depicted in green, red, and blue respectively. Credit: DOI:10.13140/RG.2.2.20077.54245
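
As an illustration, a "True-colour" view can be assembled by stacking the Red, Green and Blue bands. Here is a hedged sketch with rasterio and numpy - the band file names and the display stretch value are assumptions, not fixed conventions:

```python
import numpy as np
import rasterio

def read_band(path):
    with rasterio.open(path) as src:
        return src.read(1).astype("float32")

# Sentinel-2: B04 = Red, B03 = Green, B02 = Blue (all at 10 m)
red, green, blue = (read_band(p) for p in ("B04.tif", "B03.tif", "B02.tif"))

# Stack into an RGB cube and scale the values to 0-1 for display;
# dividing by 3000 is a common (but arbitrary) brightness stretch
rgb = np.clip(np.dstack([red, green, blue]) / 3000.0, 0.0, 1.0)
```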

Optical sensor features

| Sensor | Spectral bands (number / spectral interval, nm) | Spatial resolution, m | Revisit time (in December over South Rothera) |
| --- | --- | --- | --- |
| Sentinel-3 (OLCI) | 21 / 400-1020 | 300 | 3 times a day |
| MODIS (Terra & Aqua) | 36 / 405-965, 3000-15000 | 250, 500, 1000 | 2 times a day |
| Sentinel-2 | 13 / 443-2190 | 10, 20, 60 | once in 3 days |
| Landsat 8 & 9 | 11 / 430-1380, 10600-12510 | 15, 30, 100 | once in 3 days |
| Maxar WorldView-3 | 16 / 400-1040, 1195-2365 | 0.3, 1.24, 3.70 | – |
| Airbus Pleiades | 6 / 400-880 | 0.3, 1.2 | – |

Radar Remote Sensing and SAR

Another group of imagery we usually work with is SAR (Synthetic Aperture Radar). SAR differs greatly from optical imagery in how images are formed, in the wavelengths used, and in the viewing geometry. This makes it more challenging to use, but it also has some advantages.

Active sensors

While optical sensors measure sunlight reflected from the Earth's surface, a radar emits a pulse of microwave energy (with wavelengths from centimetres to tens of centimetres) and detects its reflection from objects. This means that a) SAR sensors are not light-dependent, as they register their own emitted signal; and b) microwave signals can pass through clouds, as the wavelengths are longer than the size of water particles.

Electromagnetic spectrum Credit: NASA ARSET

Wavelengths

Image formation

A radar registers three parameters:

  • Amplitude - the strength of the reflected echo. Amplitude changes due to:

    • Object size and orientation to signal

    • Surface roughness

    • Dielectric properties of the surface

    • Incidence angle

    • Polarization

    • Frequency or Wavelength

  • Signal phase

  • The time between the pulse being generated and the echo being received

The last parameter is the reason the sensor has to look sideways. Otherwise, two objects at the same distance from the sensor would send their echoes back at the same time. This would make it impossible to differentiate them, and therefore to form an image.
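
The timing measurement converts directly to distance. Here is a minimal sketch of the standard slant-range relation, where the factor of two accounts for the round trip of the pulse:

```python
C = 299_792_458.0  # speed of light, m/s

def slant_range(echo_delay_s: float) -> float:
    """Sensor-to-target distance from the pulse round-trip time."""
    return C * echo_delay_s / 2

# An echo received 5 milliseconds after the pulse was sent
print(slant_range(5e-3))  # ~749,481 m, i.e. roughly 750 km
```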

Down-Looking vs. Side-Looking Radar Credit: NASA ARSET

The side-looking geometry, combined with the physical properties of the radar wave, results in geometric distortions known as Layover and Foreshortening.

Another feature of SAR images is Speckle - a noise-like pattern that results from the interference of the many scattering echoes within a resolution cell.
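
Speckle is commonly reduced by multilooking - averaging neighbouring pixels at the cost of spatial resolution. Here is a minimal sketch with numpy and scipy, using a synthetic intensity image rather than real SAR data:

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Synthetic single-look intensity image: speckle is often modelled as
# multiplicative exponential noise over a constant backscatter level
rng = np.random.default_rng(0)
intensity = rng.exponential(scale=1.0, size=(512, 512))

# A 5x5 boxcar average ("multilook") smooths the speckle considerably
multilooked = uniform_filter(intensity, size=5)

print(intensity.std(), multilooked.std())  # the variation drops markedly
```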

SAR image, representing Foreshortening, Layover and Shadow effects
Optical image

Where do I get a satellite image?

Email [email protected] for any imagery request

Image interpretation examples

This page presents some examples of image interpretation.

Crevassing

  • Visible on medium to VHR imagery

  • As always with optical imagery: cloud-free conditions required

  • Low sun angle can be advantageous

Crevassing in McCallum pass, Sentinel-2 Image, Copernicus Sentinel Data 2023, Processed by ESA

Same area, Google Maps (2024), VHR image.

Same area, Data derived from SAR © [2024] Umbra Lab, Inc. (originally licensed under CC BY 4.0)

Wildlife

A VHR image of an emperor penguin colony at Halley Bay. An estimated 22,510 penguins were counted at this site. Maxar image, courtesy of the Australian Antarctic Division. Source

Atka Bay penguin colony on a VHR SAR image. You can see ice on the left and smooth ice, probably covered with fresh snow, on the right. White swarms on the ice are penguins. Data derived from SAR © [2024] Umbra Lab, Inc. (originally licensed under CC BY 4.0)

Sastrugi

Sastrugi on a Landsat 8 image (15 m)

Sea ice

Fast ice

First-year fast-ice at Case Corner, Modified Copernicus Sentinel data, 2024/Sentinel Hub

First-year fast-ice at Case Corner, Modified Copernicus Sentinel data, 2024/Sentinel Hub

Pack ice

Pack Ice and close drift ice at Stange Ice Front, Modified Copernicus Sentinel data, 2024/Sentinel Hub

Pack Ice and close drift ice at Stange Ice Front, Modified Copernicus Sentinel data, 2024/Sentinel Hub

Sea ice around Coronation Island, Sentinel-1 SAR (40 m, HH polarisation). Tabular bergs (bright white) within mobile sea ice. Dense pack ice (up to 100%) with floes in the bottom-right corner. Open water in the lee of the tabular bergs and Coronation Island towards the top of the image. Open water (dark) on the left of the image. Icebergs (bright dots) in the bottom-left of the image.
