SuperF: Neural Implicit Fields for Multi-Image Super-Resolution
Super-Resolving Any Place on Earth
TLDR: This app super-resolves Sentinel-2 optical satellite images (native resolution: 10 m) using the SuperF approach. The default run performs 4x super-resolution; higher scale factors are available in the settings.
More details about the project: https://sjyhne.github.io/superf/
What happens under the hood? After the user enters a geographic coordinate (latitude, longitude), the application downloads multiple cloud-free Sentinel-2 images of the same location. These 10 m images have subtle spatial misalignments, which provide the sub-pixel information needed to compute a super-resolved image. From these images, the SuperF approach optimizes an implicit neural representation (INR) of the shared underlying high-resolution image. SuperF achieves this by (i) jointly optimizing the INR together with the affine alignment between the individual frames, and (ii) sharing a single coordinate-based neural network to represent the high-resolution signal underlying all low-resolution Sentinel-2 images.
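The joint optimization can be sketched as follows. This is a minimal toy illustration, not the SuperF implementation: the network, affine parameterization, and data here are invented for clarity, and a real run would use Fourier features, proper low-resolution rendering, and the actual Sentinel-2 frames.

```python
import torch

# A coordinate MLP represents the shared high-resolution signal; each
# low-resolution frame k has its own learnable affine parameters that
# warp the sampling grid before querying the MLP.

class CoordMLP(torch.nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(2, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, 1),
        )

    def forward(self, xy):  # xy: (N, 2) coordinates in [-1, 1]
        return self.net(xy)

def affine_warp(xy, theta):
    # theta: (2, 3) affine matrix applied to homogeneous coordinates
    ones = torch.ones(xy.shape[0], 1)
    return torch.cat([xy, ones], dim=1) @ theta.T

# Toy data: K slightly shifted observations of a smooth 2-D signal
torch.manual_seed(0)
K, H = 4, 16
ys, xs = torch.meshgrid(
    torch.linspace(-1, 1, H), torch.linspace(-1, 1, H), indexing="ij"
)
grid = torch.stack([xs.flatten(), ys.flatten()], dim=1)
signal = lambda p: torch.sin(3 * p[:, :1]) * torch.cos(3 * p[:, 1:2])
shifts = 0.05 * torch.randn(K, 2)
frames = [signal(grid + shifts[k]) for k in range(K)]

mlp = CoordMLP()
# One learnable affine per frame, initialised to the identity
thetas = torch.nn.Parameter(torch.eye(2, 3).repeat(K, 1, 1))
opt = torch.optim.Adam(list(mlp.parameters()) + [thetas], lr=1e-2)

# Jointly optimize the shared INR and the per-frame alignments
for step in range(200):
    opt.zero_grad()
    loss = sum(
        torch.mean((mlp(affine_warp(grid, thetas[k])) - frames[k]) ** 2)
        for k in range(K)
    )
    loss.backward()
    opt.step()

print(f"final loss: {loss.item():.4f}")
```

Because the same MLP must explain every frame, the per-frame affines absorb the misalignments while the network converges to the shared high-resolution signal, which can then be queried on an arbitrarily fine grid.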
Note: The default run uses up to 8 images and 2000 optimization iterations. Processing may take 5-15 minutes depending on the settings, time window, image size, and scale factor.
Location Selection
Try a preset location and date range:
💡 Tip: Press Enter after editing the coordinates to update the map.
Processing Settings
Enable uncertainty estimation during training. This uses GaussianNLLLoss instead of the MSE loss, so the model also predicts a per-pixel variance.
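For reference, a minimal sketch of how the uncertainty objective differs from MSE, using PyTorch's `GaussianNLLLoss`. The tensor names and shapes here are illustrative assumptions, not the app's actual code: the model outputs a mean and a (positive) variance per pixel, and both enter the loss.

```python
import torch

# Illustrative stand-ins for network outputs on 8 pixels
mean = torch.randn(8, 1, requires_grad=True)      # predicted pixel values
log_var = torch.zeros(8, 1, requires_grad=True)   # predicted log-variance
target = torch.randn(8, 1)                        # observed pixel values

# GaussianNLLLoss takes (input, target, var); exponentiating the
# log-variance keeps the variance strictly positive.
nll = torch.nn.GaussianNLLLoss()
loss = nll(mean, target, log_var.exp())
loss.backward()
```

Compared with MSE, the negative log-likelihood lets the model down-weight pixels it predicts a high variance for, yielding an uncertainty map alongside the super-resolved image.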