We wrote a quick ‘n dirty Python API for fetching, filtering and saving the images sent from Mars(!!!) by NASA’s Perseverance rover. This post shows a few different ways we used it to piece together the puzzle of the red planet (since NASA seems to like puzzles).
Some imports and a helper function to plot a grid of images:
import PIL
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import axes3d
from mars2020 import mars2020api
def display_image_grid(images: list[mars2020api.ImageData], columns=3, width=15, height=8, max_images=30):
    if len(images) > max_images:
        print(f"Showing {max_images} of {len(images)} images")
        images = images[:max_images]
    height = max(height, int(len(images) / columns) * height)
    plt.figure(figsize=(width, height))
    for i, image in enumerate(images):
        plt.subplot(len(images) // columns + 1, columns, i + 1)
        plt.imshow(image.image_data)
        plt.axis("off")
    plt.show()
Fetch all of NASA’s Mars 2020 data (this just gets the image metadata; the actual images are downloaded lazily when requested):
all_data = mars2020api.ImageDataCollection.fetch_all_mars2020_imagedata()
Collage
During the descent, the EDL_RDCAM camera continuously took a ton of pictures that were perfect for collaging together.
images = [
    x for x in all_data.images
    if x.camera_type.instrument == "EDL_RDCAM"
    and not x.instrument_metadata.thumbnail  # Not a thumbnail pic
    and x.instrument_metadata.filter_number == "E"
]
len(images)
217
display_image_grid(images[:6])
We used Photoshop’s Photomerge algorithm (we had to subsample to 100 images to keep Photoshop from crashing) to get this absolute beauty:
And similarly for filter_number = "F":
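Before handing anything to Photomerge, the filtered frames have to be on disk as regular image files. Here is a minimal sketch of that export step, assuming each ImageData exposes its picture as a PIL image via image_data (as display_image_grid does above); the function name and the step parameter are our own, not part of the API:

```python
import os

def export_for_photomerge(pil_images, out_dir, step=2):
    # Save every `step`-th frame as a PNG; subsampling keeps the
    # image count low enough that Photoshop doesn't choke.
    os.makedirs(out_dir, exist_ok=True)
    paths = []
    for i, img in enumerate(pil_images[::step]):
        path = os.path.join(out_dir, f"frame_{i:04d}.png")
        img.save(path)
        paths.append(path)
    return paths

# e.g. export_for_photomerge([x.image_data for x in images], "edl_frames")
```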
Panorama
NASA released a beautiful 360-degree panorama shot by the Mastcam-Z cameras on board. We tried to replicate it by fetching the same images and running them through Photomerge again.
NASA claims to have used the GigaPan software for this, but we couldn’t really get that to work, probably because of the ordering of the images.
images = [
    x
    for x in all_data.images
    if x.camera_type.instrument == "MCZ_LEFT"  # MastCam Z - Left
    and not x.instrument_metadata.thumbnail  # Not a thumbnail picture
    and x.instrument_metadata.filter_number == "F"
    and x.date_received_on_earth_utc.day == 24  # Received on 24th Feb 2021
]
len(images)
143
display_image_grid(images[:6])
The camera_position and camera_vector fields show how the panorama was shot.
positions = np.array([x.camera_type.camera_position for x in images])
vectors = np.array([x.camera_type.camera_vector for x in images])
fig = plt.figure(figsize=(12, 12))
ax = fig.add_subplot(projection="3d")  # fig.gca(projection=...) was removed in Matplotlib 3.6
ax.quiver(-positions[:, 0], -positions[:, 1], -positions[:, 2],
          -vectors[:, 0], -vectors[:, 1], -vectors[:, 2],
          length=0.01, linewidth=0.8,
          arrow_length_ratio=0.5,
          color="#8959a8")
plt.axis("off")
plt.show()
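Since the ordering seemed to be what tripped up GigaPan, one thing worth trying is sorting the frames by horizontal shooting angle before stitching. A sketch under the assumption that camera_vector points along the shooting direction and that its first two components span the ground plane (the helper name is ours):

```python
import numpy as np

def order_by_azimuth(vectors):
    # Sort indices by the horizontal angle of each camera vector, so
    # frames run around the panorama rather than in download order.
    v = np.asarray(vectors, dtype=float)
    azimuth = np.arctan2(v[:, 1], v[:, 0])
    return np.argsort(azimuth)

# e.g. ordered = [images[i] for i in order_by_azimuth(vectors)]
```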
Here are some results from Photomerge!
RGB
A bunch of the cameras took separate R, G, and B channels for each image. We matched these together with the camera_vector information and composited them.
def match_rgb(r_image: mars2020api.ImageData, g_images: list[mars2020api.ImageData], b_images: list[mars2020api.ImageData]):
    vector = r_image.camera_type.camera_vector
    g_image = [x for x in g_images if x.camera_type.camera_vector == vector]
    if len(g_image) == 0:
        return None
    b_image = [x for x in b_images if x.camera_type.camera_vector == vector]
    if len(b_image) == 0:
        return None
    return (r_image, g_image[0], b_image[0])
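One caveat: match_rgb compares floating-point camera vectors with ==, which only works as long as the API returns bit-identical values for co-pointed shots. If that ever stops holding, a tolerance-based comparison is safer. A hypothetical helper, not part of the mars2020 API:

```python
import numpy as np

def vectors_match(a, b, atol=1e-9):
    # Compare camera vectors component-wise with a small tolerance
    # instead of exact float equality.
    return bool(np.allclose(np.asarray(a, dtype=float),
                            np.asarray(b, dtype=float), atol=atol))
```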
rgb_matches = []
for cam_type in all_data.instrument_names:
    cam_images = [x for x in all_data.images if x.camera_type.instrument == cam_type and not x.instrument_metadata.thumbnail]
    filters = set(x.instrument_metadata.filter_number for x in cam_images)
    if not set("RGB").difference(filters):
        print(cam_type)
        r_images = [x for x in cam_images if x.instrument_metadata.filter_number == "R"]
        g_images = [x for x in cam_images if x.instrument_metadata.filter_number == "G"]
        b_images = [x for x in cam_images if x.instrument_metadata.filter_number == "B"]
        rgb_matches += [match_rgb(r_image, g_images, b_images) for r_image in r_images]
len(rgb_matches)
REAR_HAZCAM_LEFT
FRONT_HAZCAM_RIGHT_A
FRONT_HAZCAM_LEFT_A
REAR_HAZCAM_RIGHT
NAVCAM_LEFT
NAVCAM_RIGHT
47
rgb_images = []
for m in rgb_matches:
    if m is None:  # match_rgb returns None when no G/B counterpart exists
        continue
    rgb_image = np.zeros((m[0].dimension[1], m[0].dimension[0], 3), dtype=np.uint8)  # uint8: pixel values go up to 255
    for i in range(3):
        rgb_image[:, :, i] = np.asarray(m[i].image_data.split()[i])
    rgb_images.append(PIL.Image.fromarray(rgb_image, "RGB"))
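The same compositing can also be done without the intermediate numpy buffer: PIL.Image.merge stitches three single-band images into an RGB image directly. A tiny self-contained demo with synthetic 4x4 channels standing in for the real captures:

```python
import numpy as np
import PIL.Image

# Three flat single-band images playing the role of the R, G, and B shots.
bands = [PIL.Image.fromarray(np.full((4, 4), v, dtype=np.uint8), "L")
         for v in (200, 120, 40)]

# Composite in one call instead of filling a numpy array channel by channel.
rgb = PIL.Image.merge("RGB", bands)
# For the real data this would be:
# PIL.Image.merge("RGB", [m[i].image_data.split()[i] for i in range(3)])
```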
An example of what that looks like:
See the rest, as well as high-res versions of all of the above, in this Flickr album.
Next
We’re thinking of wrapping this in a simple GUI for filtering and exploring images. Can’t wait for NASA to upload more! Let’s see if humanity finds out anything about the biggest puzzle of them all.