What Rubik's Cube Mosaics Taught Me About Satellite Imagery
When I was 13 or 14, I used to compete in Rubik's Cube (speedcubing) competitions. It really irked me as an older teenager that my "World Cube Association" profile was the first thing that came up when you searched my name on Google. That page has long since been search-optimized away, but a Rubik's cube still sits on my windowsill. At these events, someone would occasionally be building a mural out of cubes, usually of something tacky like a portrait of Einstein or Kurt Cobain. I've been thinking about those murals recently, especially as someone who spends a lot of their time looking at maps.
One thought was that these mosaics are a uniform grid of 6 discrete values. They are not like the mosaic that you walk by on a subway platform, a shimmering tapestry of so many jagged ceramic hues. In that sense, the Rubik's cube murals are photographic, less abstract. I never knew how they generated the plans for the mosaics, but a cursory search shows that there are tools online to convert PNGs to mosaic schematics.
These images are visual representations of a raster, like any satellite scene you might come across. So I decided to build this tool to convert a GeoTIFF into a six-color Rubik's cube mosaic. The results highlight many of the challenges we face in producing and visualizing remote sensing imagery.
First, we need to export some images. The following Google Earth Engine script finds cloud-free images in three ~3,500 km² AOIs and exports them to Google Drive. The ten least-cloudy Landsat 8 and Landsat 9 scenes are merged and mosaicked to avoid incomplete coverage or NoData pixels.
```python
import ee

ee.Authenticate()
ee.Initialize(project="xxxxxxxxx")

AOIS = {
    'gibraltar': [-5.946398, 35.748726, -5.295299, 36.258071],
    'juneau': [-134.805234, 58.279013, -134.132036, 58.633128],
    'nyc': [-74.271899, 40.492909, -73.701649, 40.913051],
}

lc08 = ee.ImageCollection('LANDSAT/LC08/C02/T1_L2')
lc09 = ee.ImageCollection('LANDSAT/LC09/C02/T1_L2')

for name, bbox in AOIS.items():
    geometry = ee.Geometry.Rectangle(bbox)
    # Merge L8 + L9, keep the 10 least-cloudy scenes, and mosaic them
    landsat = (lc08.merge(lc09)
               .filterBounds(geometry)
               .filterDate('2023-06-01', '2025-09-01')
               .sort('CLOUD_COVER')
               .limit(10)
               .mosaic())
    # Apply Collection 2 surface-reflectance scale factors, then stretch to 0-255
    scaled = (landsat.select(['SR_B4', 'SR_B3', 'SR_B2'])
              .multiply(0.0000275).add(-0.2).clamp(0, 1).multiply(255))
    task = ee.batch.Export.image.toDrive(
        image=scaled,
        description=f'landsat_{name}',
        crs='EPSG:4326',
        scale=30,
        region=geometry)
    task.start()
```
After exporting, the true-color visualizations of those three scenes look like this:
As the Einstein and Cobain murals show, there are two approaches we can take when visualizing images within the constraint of the six colors of a Rubik's cube.
The most common approach is the one used in the Einstein mosaic: the image is flattened to grayscale, and the cube colors stand in for different lightness values. This usually yields an image of only four or five colors, with white, yellow, orange, red, and blue forming a spectrum from the lightest to the darkest areas of the image. The official Rubik's cube color scheme includes the following colors:
- Spanish Green (#009B48)
- Guardsman Red (#B90000)
- Cobalt Blue (#0045AD)
- Willpower Orange (#FF5900)
- Full White (#FFFFFF)
- Gold (#FFD500)
Because green's lightness lands almost on top of orange's, it adds no new tone to this spectrum, and it is often dropped entirely as it can muddy the image.
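As a quick sanity check (my own aside, not part of the original tool), here are the Rec. 709 luma values that the grayscale quantizer further down effectively sees for each palette color:

```python
# Rec. 709 luma of each official cube color, mirroring the weights
# used by the grayscale quantizer (0.2126 R + 0.7152 G + 0.0722 B).
CUBE_COLORS = {
    'green': (0, 155, 72),
    'red': (185, 0, 0),
    'blue': (0, 69, 173),
    'orange': (255, 89, 0),
    'white': (255, 255, 255),
    'yellow': (255, 213, 0),
}

def luma(rgb):
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

for name, rgb in sorted(CUBE_COLORS.items(), key=lambda kv: -luma(kv[1])):
    print(f'{name:>6}: {luma(rgb):6.1f}')
# white 255.0, yellow 206.6, orange 117.9, green 116.1, blue 61.8, red 39.3
```

By this measure green sits within two luma units of orange, which is why dropping it costs almost no tonal range.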
The other approach, used in the Cobain example, mimics the effect of an RGB image and tricks the eye into seeing colors that are not on the cube, either by snapping each pixel to the palette color at the smallest Euclidean distance or by applying a dithering algorithm like Floyd-Steinberg. The functions below preprocess the raster, resize it to a 500×N array, apply one of the conversions discussed above, and produce a PNG output. Floyd-Steinberg dithering is an optional toggle in the RGB mode; the dithering implementation is adapted from https://scipython.com/blog/floyd-steinberg-dithering/.
```python
import argparse

import numpy as np
import rasterio
from PIL import Image

CUBE_COLORS = {
    'green': (0, 155, 72),     # Hex: #009B48
    'red': (185, 0, 0),        # Hex: #B90000
    'blue': (0, 69, 173),      # Hex: #0045AD
    'orange': (255, 89, 0),    # Hex: #FF5900
    'white': (255, 255, 255),  # Hex: #FFFFFF
    'yellow': (255, 213, 0),   # Hex: #FFD500
}

def load_tiff(path):
    """Read the first three bands of a GeoTIFF as an (H, W, 3) array."""
    with rasterio.open(path) as src:
        data = src.read([1, 2, 3])  # shape: (3, H, W)
    return np.transpose(data, (1, 2, 0))

def preprocess(rgb):
    """Percentile-stretch to uint8, ignoring NoData zeros."""
    valid = rgb[rgb > 0]
    p2, p98 = np.percentile(valid, (2, 98))
    stretched = (rgb - p2) / (p98 - p2)
    clipped = np.clip(stretched, 0, 1)
    scaled = np.nan_to_num(clipped * 255, nan=0)
    return scaled.astype(np.uint8)

def resize_to_cubes(img, n_wide=500):
    """Downsample so each pixel maps to one cube sticker."""
    h, w = img.shape[:2]
    n_tall = round(n_wide * h / w)
    pil_img = Image.fromarray(img)
    resized = pil_img.resize((n_wide, n_tall), Image.Resampling.BOX)
    return np.asarray(resized)

def quantize_grayscale(img):
    """Map each pixel to the palette color with the nearest Rec. 709 luma."""
    lum = 0.2126*img[:, :, 0] + 0.7152*img[:, :, 1] + 0.0722*img[:, :, 2]
    palette = np.array(list(CUBE_COLORS.values()))  # shape (6, 3)
    palette_lum = 0.2126*palette[:, 0] + 0.7152*palette[:, 1] + 0.0722*palette[:, 2]
    diff = np.abs(lum[:, :, np.newaxis] - palette_lum)  # shape (H, W, 6)
    indices = np.argmin(diff, axis=-1)  # shape (H, W)
    return palette[indices]  # shape (H, W, 3)

def quantize_rgb(img):
    """Map each pixel to the palette color at the smallest Euclidean distance."""
    pixels = img.reshape(-1, 3).astype(np.float32)
    palette = np.array(list(CUBE_COLORS.values()), dtype=np.float32)  # shape (6, 3)
    diff = pixels[:, np.newaxis, :] - palette[np.newaxis, :, :]  # shape (N, 6, 3)
    distances = np.sqrt(np.sum(diff**2, axis=-1))  # shape (N, 6)
    indices = np.argmin(distances, axis=-1)  # shape (N,)
    return palette[indices].reshape(img.shape)  # shape (H, W, 3)

def quantize_rgb_dither(img):
    """Nearest-palette quantization with Floyd-Steinberg error diffusion."""
    palette = np.array(list(CUBE_COLORS.values()), dtype=np.float32)
    buf = img.astype(np.float32).copy()
    result = np.zeros_like(img)
    h, w = img.shape[:2]
    for y in range(h):
        for x in range(w):
            pixel = buf[y, x]
            dists = np.sqrt(np.sum((pixel - palette)**2, axis=1))
            idx = np.argmin(dists)
            result[y, x] = palette[idx]
            error = pixel - palette[idx]
            # Diffuse the error onto unvisited neighbors
            if x + 1 < w:
                buf[y, x+1] += error * 7/16
            if y + 1 < h:
                if x - 1 >= 0:
                    buf[y+1, x-1] += error * 3/16
                buf[y+1, x] += error * 5/16
                if x + 1 < w:
                    buf[y+1, x+1] += error * 1/16
    return result

def render(quantized, cube_px=3):
    """Upscale so each sticker is a cube_px x cube_px block in the PNG."""
    big = np.repeat(np.repeat(quantized, cube_px, axis=0), cube_px, axis=1)
    return Image.fromarray(big.astype(np.uint8))

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('--mode', choices=['grayscale', 'rgb', 'truecolor'], required=True)
    parser.add_argument('--input', required=True)
    parser.add_argument('--output', required=True)
    parser.add_argument('--n-wide', type=int, default=500)
    parser.add_argument('--cube-px', type=int, default=3)
    parser.add_argument('--dither', action='store_true')
    args = parser.parse_args()

    img = load_tiff(args.input)
    img = preprocess(img)
    img = resize_to_cubes(img, args.n_wide)
    if args.mode == 'grayscale':
        quantized = quantize_grayscale(img)
    elif args.mode == 'rgb':
        quantized = quantize_rgb_dither(img) if args.dither else quantize_rgb(img)
    else:  # truecolor: skip quantization entirely
        quantized = img
    result = render(quantized, args.cube_px)
    result.save(args.output)

if __name__ == '__main__':
    main()
```
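To make the rendering step concrete: each mosaic sticker becomes a cube_px × cube_px block of identical pixels in the output PNG. A tiny standalone check on a hypothetical 2×2-sticker mosaic:

```python
import numpy as np

# A hypothetical 2x2-sticker mosaic: red, green / blue, white.
quantized = np.array([
    [[185, 0, 0], [0, 155, 72]],
    [[0, 69, 173], [255, 255, 255]],
], dtype=np.uint8)

# Same upscaling as render(): repeat rows, then columns, cube_px times.
cube_px = 3
big = np.repeat(np.repeat(quantized, cube_px, axis=0), cube_px, axis=1)

print(big.shape)  # (6, 6, 3): every sticker is now a 3x3 pixel block
```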
The grayscale-proxy murals were an unexpected favorite of mine. Despite being based on luminance alone, they're almost useful as land classification maps. You can make out the dense orange neighborhoods of Manhattan and inner Brooklyn, the sprawling yellow industrial core of Queens, and the reflective sand of the beaches and tarmac of the airports. You can see the ice flows of the Juneau Icefield and make out the forests and farmland of Andalusia.
The Euclidean-RGB images are a bit disappointing; it turns out most of the world is blue and green. I do like the composition of the Juneau image, though. I think it came out overexposed because we apply the same preprocessing to every image and never adjust the gamma.
From afar, you can see how effective the dithering is in creating apparent colors outside of the 6 to which we were limited. These are less fun, in my opinion.
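The mechanics behind those apparent colors are easiest to see in a single channel. A minimal sketch, assuming a two-level black/white palette instead of the six cube colors: diffusing the quantization error lets a field of mid-gray pixels average out to mid-gray, even though every output pixel is pure black or white.

```python
import numpy as np

def fs_dither_bw(img):
    """Floyd-Steinberg dithering to a two-level {0, 255} palette."""
    buf = img.astype(np.float64).copy()
    h, w = buf.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            old = buf[y, x]
            new = 255.0 if old >= 127.5 else 0.0  # nearest palette level
            out[y, x] = new
            err = old - new
            # Push the error onto unvisited neighbors (7/16, 3/16, 5/16, 1/16)
            if x + 1 < w:
                buf[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    buf[y + 1, x - 1] += err * 3 / 16
                buf[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    buf[y + 1, x + 1] += err * 1 / 16
    return out

gray = np.full((32, 32), 100.0)
dithered = fs_dither_bw(gray)
# Hard thresholding would output all black (100 < 127.5); dithering
# mixes black and white pixels so the mean stays near the input value.
```

Plain nearest-color quantization of the same field would be uniformly black; the dithered field's mean stays close to 100.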
The results are 500×N matrices, with each pixel representing a single sticker of a 3×3 Rubik's cube. At a maximum dimension of 167 cubes wide, these mosaics would be around 9.5 meters across and require more than 20,000 Rubik's cubes to assemble.
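The arithmetic behind those numbers, assuming a standard ~57 mm speedcube (the physical cube size and the 500×375 example dimensions are my assumptions; the three-stickers-per-face figure is just the geometry of a 3×3 cube):

```python
import math

STICKERS_PER_FACE_SIDE = 3  # a 3x3 cube face is three stickers across
CUBE_SIZE_M = 0.057         # standard speedcube: ~57 mm (assumption)

def mural_stats(px_wide, px_tall):
    """Cube counts and physical width for a px_wide x px_tall mosaic."""
    cubes_wide = math.ceil(px_wide / STICKERS_PER_FACE_SIDE)
    cubes_tall = math.ceil(px_tall / STICKERS_PER_FACE_SIDE)
    return cubes_wide, cubes_wide * cubes_tall, cubes_wide * CUBE_SIZE_M

# e.g. a hypothetical 500x375 mosaic:
cubes_wide, total_cubes, width_m = mural_stats(500, 375)
print(cubes_wide, total_cubes, round(width_m, 2))  # 167 20875 9.52
```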
Unless someone would like to sponsor me, I won't be able to build one of these mosaics anytime soon. That cost exposes a flaw in the economics of Rubik's cube murals, the same one that holds back progress in all manner of optical sensors and displays: increasing linear resolution demands a quadratic increase in pixel or sensor count.
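Concretely (a toy calculation of my own): doubling the linear resolution of a square mural quadruples the number of cubes you need.

```python
def total_cubes(cubes_wide, cubes_tall):
    # Cube count grows with the product of the two dimensions.
    return cubes_wide * cubes_tall

small = total_cubes(167, 167)  # a 167-cube-wide square mural
large = total_cubes(334, 334)  # double the linear resolution
print(large // small)  # 4
```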
These abstractions aren't too far removed from their source images, which are abstractions in and of themselves: snapshots in time, light which bounced around space and off of New York City, or a glacier, or a boat bound for Tunis, eventually landing squarely on a port-a-potty-sized hunk of metal hurtling through orbit above us.