This wiki aims to provide reference material and hands-on tutorials for newcomers to SurRender. It guides the user through the installation process and introduces the important concepts and key functions required to use SurRender properly in client-server mode, using Python scripts. Through detailed and progressive examples, it allows a fast grasp of the essential concepts of the software. A more complete Doxygen documentation is provided with the SurRender package. The wiki also contains a Frequently Asked Questions section which will be updated regularly based on user feedback.
This wiki uses various acronyms and abbreviations; a tooltip should appear when hovering over an acronym. If not, you can find a page listing them all in the quick access menu, under the miscellaneous section.
Guests can only access this main page. To gain access to the rest of the wiki, see How to get SurRender below for more information.
This wiki is powered by Wiki.js. You can navigate using the links provided in the menu, or browse through the wiki file system. A search engine is also provided. Some parts of the wiki are only available to SurRender users; ask the administrator if you need access to specific parts. Feel free to provide us with feedback on the wiki and tell us what we should add or if something is out of date! You can find contact info here. You can also leave a comment under the relevant page.
If you want to acquire SurRender, get in touch with our team!
If you want access to the rest of the wiki, you will need to create an account by clicking the login button. After registration, contact us and an administrator will verify your email and register you as a SurRender user, giving you access to all the tutorials, guides and support sections of the wiki.
SurRender is Airbus' advanced image simulator for space applications. You can find the product description in the product sheet provided in the SurRender package. PDF support should come to this wiki shortly.
The use of cameras and LiDARs as navigation sensors on space vehicles, and in particular on exploration probes, has been an emerging topic over the last 15 years. The development and maturation of this field – known as computer vision – for space exploration require a large number of images, acquired under a variety of conditions. Real images from actual space exploration missions can rarely be used directly for this task, as the accuracy of the associated ground truth is most of the time insufficient for the validation of computer vision algorithms. Image simulation is thus a mandatory step for space exploration-related computer vision projects.
Although several image simulation tools are publicly available, none of them fully addresses the specific challenges of scientific image rendering of space scenes. For this reason, Airbus Defence and Space has been developing its own image simulation tool, SurRender, since 2011. As a scientific image rendering software dedicated to space exploration missions, SurRender aims at high representativeness and physical accuracy.
(from left to right, top) Asteroid Itokawa, Io and Jupiter, Martian DEM, View of the same place,
(bottom) Earth (re-projection of an EUMETSAT image on a terrestrial spheroid), scene with multiple reflections, view of a DEM rendered with a true Oren-Nayar BRDF.
Since the beginning of its development in 2011, SurRender has been used in many projects, both inside and outside of Airbus Defence and Space. For instance, it is currently used for the development and validation of image processing algorithms for JUICE, the next European mission to Jupiter’s moons, and MSR ERO. The following sections present the main motivations that make SurRender a relevant choice for space image simulation.
SurRender images are primarily rendered by raytracing. The physical principles of light propagation (geometric optics) are implemented within the raytracer. Physical-optics effects, such as diffraction of the light by the aperture of the imaging instrument, are taken into account at a macroscopic level through the user-specified Point Spread Function (PSF). The interaction of light with the surface of scene objects is modelled in terms of a Bidirectional Reflectance Distribution Function (BRDF). The image is rendered in physical units: each pixel contains an irradiance value expressed in W/m².
SurRender’s geometrical modelling of planetary bodies is based on an analytical ellipsoid shape, with a custom flattening value, mapped with a DEM. In addition to texture mapping (see hereafter), surface reflectance properties can be finely modelled. SurRender also provides depth maps and normal maps, which can be used as ground truth for algorithm validation, or as input data for a LiDAR model.
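To make the ellipsoid-plus-DEM modelling concrete, the short Python sketch below computes a surface point as the local spheroid radius displaced radially by a DEM elevation. It is an illustrative sketch only, not SurRender's internal code; the radius and flattening values are roughly Mars-like placeholders and the DEM height is passed in directly.

```python
import numpy as np

def surface_point(lat, lon, a=3396.2e3, f=0.00589, dem_height=0.0):
    """Cartesian surface point of an oblate spheroid of equatorial radius `a`
    and flattening `f`, displaced radially by a DEM elevation.

    lat, lon are geocentric latitude/longitude in radians. Values are
    illustrative (roughly Mars-like), not SurRender defaults."""
    b = a * (1.0 - f)                       # polar radius
    # geocentric radius of the ellipsoid at this latitude
    r = (a * b) / np.sqrt((b * np.cos(lat))**2 + (a * np.sin(lat))**2)
    r += dem_height                         # add the DEM elevation
    return r * np.array([np.cos(lat) * np.cos(lon),
                         np.cos(lat) * np.sin(lon),
                         np.sin(lat)])

# Example: a point at 10° N, 30° E with a 2 km high DEM feature
p = surface_point(np.radians(10.0), np.radians(30.0), dem_height=2000.0)
```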
The images generated by SurRender are physically accurate, and are thus suitable for scientific applications, from radiometric studies and sensor design with respect to the expected scene content, to the validation of the performance of computer vision algorithms.
Left: Jovian moons simulated by SurRender
Right: real image from the New Horizons mission. Illumination of Jupiter on Io is correctly simulated.
Note the volcanoes on Io, which are not implemented in SurRender
Simulated by SurRender (left) and as imaged by LORRI (right)
SurRender is optimized for space scenarios with huge distances between objects. All computations are performed in double precision, which is sufficient to cover the numerical precision requirements for scene sizes at the scale of the solar system: with roughly 16 significant digits, positions at the scale of the outer solar system (a few billion kilometres) are still resolved to millimetre level.
Space scenes are sparse: SurRender explicitly takes advantage of this property and is capable of focusing the raytracing directions on energy sources. This drastically reduces the rendering time for the same image quality, making SurRender several orders of magnitude faster than general-purpose image renderers, while still providing high physical accuracy. Various other optimizations have also been implemented to reduce both noise (raytracing is a stochastic process) and rendering time.
Planetary approach and landing missions pose a specific challenge for an image simulation tool, which must handle planet-size textures at very high resolution in order to provide high-quality images from the start of the approach until touch-down. SurRender is able to use “giant textures”, with a custom protocol that maps the texture data to a virtual file of up to 256 TB. Although the virtual file is accessed by the software like a regular file, with the same memory and cache optimizations, its content can be distributed either over several files on a hard drive, or across the network. Alternatively, procedural texture generation can be used to generate texture data on-the-fly, making it possible to generate images with a level of detail far beyond the original texture.
All elements of the simulation can be customized, from surface materials (BRDF) to sensor models. SurRender comes with its own modelling tool, a high-level language derived from C, which easily handles scalar and vector types. This modelling language can be used to write models for materials, analytical shapes, or sensor electronics. Classical BRDF models are already bundled with SurRender: the Lambertian model, the Hapke model for the Moon and asteroids, and the Oren-Nayar model for the Jovian moons.
The amount of light reflected by Jupiter in each spectral band depends on the albedo and on the colors of Jupiter’s clouds
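As an illustration of the simplest of the bundled models, the Lambertian BRDF, the short Python sketch below evaluates the radiance leaving a diffuse surface. It only illustrates the underlying physics; it is not written in SurRender's own C-derived modelling language, and the numerical values are generic examples.

```python
import numpy as np

def lambert_brdf(albedo):
    """Lambertian BRDF: constant albedo / pi, independent of view and illumination."""
    return albedo / np.pi

def reflected_radiance(albedo, sun_irradiance, incidence_angle):
    """Radiance [W/m^2/sr] leaving a Lambertian surface lit at `incidence_angle`
    (radians from the surface normal) by a collimated source of irradiance
    `sun_irradiance` [W/m^2]."""
    return lambert_brdf(albedo) * sun_irradiance * np.cos(incidence_angle)

# Example: albedo 0.12 regolith lit by ~1361 W/m^2 at 30 deg incidence
L = reflected_radiance(0.12, 1361.0, np.radians(30.0))
```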
Temporal and dynamic aspects of the simulation are also handled by SurRender. This allows simulating physical effects such as motion blur, or the effect of a rolling-shutter detector on the acquired image.
Sub-windows at different integration times enable simultaneous observation of faint stars and bright objects
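To give an idea of how integration time produces motion blur, the sketch below averages instantaneous renders taken at several times spread over the exposure. This is a generic temporal-supersampling illustration, not SurRender's internal implementation; `render_at(t)` is a placeholder for any function returning an image of the scene at time t.

```python
import numpy as np

def render_with_motion_blur(render_at, t_start, integration_time, n_samples=16):
    """Approximate motion blur by averaging instantaneous renders taken at
    several times spread uniformly over the integration period."""
    times = t_start + integration_time * (np.arange(n_samples) + 0.5) / n_samples
    return np.mean([render_at(t) for t in times], axis=0)
```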
The SurRender simulator has been validated using a thorough validation process. Simulated images were compared to analytical results, at both the radiometric and geometric levels. The interested reader should refer to [RD1] SurRender Validation report, 2nd issue (July 2020) - MIPP.TCN/00989.T.ASTR, which provides both theoretical background and analysis, and test scripts.
SurRender proposes two different rendering modes: raytracing and OpenGL.
Raytracing produces highly representative images, which take into account all the physical models in terms of materials, optics and sensors. Generating an image using raytracing takes from a few seconds to several minutes depending on the desired image quality and the scene content. SurRender’s raytracing mode is thus suitable for generating highly representative images used to assess the performance and robustness of an algorithm. OpenGL rendering allows very fast image generation, compatible with real-time processing, but at the cost of physical accuracy and fine optics/sensor modelling. This rendering mode is thus suitable for real-time setups, which do not focus on fine functional performance.
SurRender generates images using backward raytracing. The content of a pixel is determined by casting several rays originating from this pixel, and finding which real-world objects these rays encounter, taking into account possible diffusion and reflections on surfaces, until each ray finally intersects a light source, usually the Sun.
Raytracing is thus a stochastic process, in which rays are cast recursively from pixel to light source, according to the physical properties of the encountered objects. In order to reduce the noise intrinsic to this stochastic process, a great number of rays must be traced. This process is computationally intensive, and the raytracer software must be heavily optimized to ensure the maximum image quality (lowest noise) within a minimum rendering time.
To this end, SurRender implements several optimizations designed specifically for space scenarios. First, space scenes are usually sparse: SurRender explicitly targets the raytracing toward scene objects, thus avoiding a lot of unnecessary computation. In order to speed up calculations, object self-visibility maps are also precomputed. These maps provide, for each point on the surface of an object, the directions in which the object lies. Using these maps, it is possible to focus the ray casting on directions that contribute to the final image.
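The benefit of focusing rays toward known energy sources can be illustrated with a small Monte Carlo experiment. The sketch below estimates the direct solar irradiance on a surface, first by sampling the whole hemisphere uniformly (almost all rays miss the tiny solar disk, so the estimate is extremely noisy), then by casting rays only inside the cone subtended by the Sun and weighting by its solid angle. This is a generic importance-sampling illustration under assumed values (solar radiance, angular radius at 1 AU), not SurRender code.

```python
import numpy as np

rng = np.random.default_rng(0)

L_SUN = 2.0e7                 # solar radiance [W/m^2/sr], order of magnitude
ALPHA = np.radians(0.267)     # angular radius of the Sun seen from 1 AU
COS_A = np.cos(ALPHA)

def irradiance_uniform(n):
    """Sample the whole hemisphere uniformly: pdf = 1/(2*pi).
    Rays almost never fall inside the solar disk -> huge variance."""
    cos_t = rng.uniform(0.0, 1.0, n)          # uniform in solid angle
    hit = cos_t > COS_A                        # direction inside the solar disk
    return np.mean(np.where(hit, L_SUN * cos_t * 2.0 * np.pi, 0.0))

def irradiance_cone(n):
    """Sample only inside the solar cone and weight by its solid angle."""
    cos_t = 1.0 - rng.uniform(0.0, 1.0, n) * (1.0 - COS_A)
    omega = 2.0 * np.pi * (1.0 - COS_A)        # solid angle of the cone
    return np.mean(L_SUN * cos_t * omega)

exact = L_SUN * np.pi * np.sin(ALPHA) ** 2     # analytical irradiance [W/m^2]
print(exact, irradiance_uniform(10_000), irradiance_cone(10_000))
```

Both estimators converge to the same irradiance, but the cone-sampling estimate is already accurate with a handful of rays, whereas the uniform one typically returns zero for this sample size.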
SurRender also implements forward raytracing functionalities. In forward raytracing, light rays are propagated from the light sources to the sensor. One interesting possibility involving forward raytracing is photon mapping. Photon mapping provides maps of the received surface radiance for all objects in the scene. This information can be used to better simulate the global illumination of the scene, or as a product in itself.
All optical systems – telescopes, cameras, or the human eye – have a common characteristic usually designated under the term Point Spread Function (PSF). The PSF is, literally, the response of the optical system to an input scene composed of a dimensionless, point-like light source, and this characteristic is of prime importance for the physics of image formation, as we will see in this section.
Optical systems have a finite aperture. This means that the light that is collected to form the image comes from a finite collecting surface: the primary mirror of the telescope, the lens of the camera, the pupil of the eye. According to the laws of wave optics, light coming through a finite aperture is subjected to diffraction. The light scattering caused by the diffraction is often the primary – but not the only – contributor to the PSF.
It must be understood that the PSF is not a defect: it is a desirable property of optical instruments, one that must be adequately tuned, but which is essential to the very concept of an image. To understand why, it may be better to use the notion of Modulation Transfer Function (MTF), which is the dual representation of the PSF in the frequency domain.
Like any real signal, images can be decomposed into an infinite sum of periodic signals, thanks to the Fourier transform. In order to form a digital image, this signal has to be sampled, which means that instead of a continuous signal, only discrete values are kept. According to Shannon’s theorem, only frequencies lower than half the sampling frequency can be adequately represented by a sampled signal. This limiting frequency is called the Nyquist frequency. Frequencies higher than the Nyquist frequency will be aliased, which means that they will appear in the sampled signal as frequencies not originally present in the real signal. Once these higher frequencies are aliased, there is no way to discriminate them from the real signal, and this leads to visible artifacts in the image. Aliasing is thus an issue that must be prevented, and it must be prevented before the sampling occurs.
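A tiny numerical example makes aliasing tangible: a signal at 7 cycles per unit length sampled at 10 samples per unit length (Nyquist frequency 5) produces exactly the same samples as a 3-cycle signal, so the two cannot be told apart after sampling. The snippet below, which is only an illustration and not tied to SurRender, verifies this.

```python
import numpy as np

fs = 10.0                    # sampling frequency [samples per unit length]
nyquist = fs / 2.0           # = 5 cycles per unit length
f_true = 7.0                 # signal frequency, above Nyquist

n = np.arange(50)
samples = np.cos(2.0 * np.pi * f_true * n / fs)

# The samples are indistinguishable from a signal at |fs - f_true| = 3 cycles
aliased = np.cos(2.0 * np.pi * (fs - f_true) * n / fs)
print(np.allclose(samples, aliased))   # True
```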
To understand how to prevent aliasing, the MTF is the important notion. The MTF of an optical instrument is the transfer function that describes the transmission or attenuation of each individual frequency of the input scene through the optical instrument. Like any transfer function describing a real, physical phenomenon, the MTF of a real instrument tends to attenuate higher frequencies. If this MTF is chosen such that it suppresses, or at least strongly attenuates, any frequency higher than the Nyquist frequency, then the problem of aliasing is solved: high frequencies from the input scene are not transmitted through the optical instrument, and are thus not aliased during sampling.
Obviously, we cannot freely choose the shape and values of an optical MTF. These are tightly linked to the optical design. It is thus the joint task of optical design experts and image quality experts to define the best design, which fulfills the requirements in terms of aliasing – and other constraints not mentioned here.
The important and practical conclusions of this section are twofold:
1- There is no real (or realistic) image without PSF.
2- PSF must be taken into account before sampling.
SurRender takes these requirements into account, and implements the PSF within the raytracing process, before sampling. Note that SurRender is able to use models of a PSF that varies across the FOV.
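The effect of applying a PSF before sampling can be illustrated with a simple experiment: a high-frequency pattern sampled directly produces a spurious low-frequency (aliased) pattern, whereas the same pattern blurred by a Gaussian PSF first is correctly rendered as almost uniform. The snippet below is only an illustration of the principle, with an arbitrary Gaussian PSF width; it does not reproduce SurRender's PSF handling.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# High-resolution "scene": a sinusoidal pattern at 0.3 cycles/pixel, far above
# the detector Nyquist frequency once only one sample in 8 is kept.
x = np.arange(512)
hires = np.sin(2.0 * np.pi * 0.3 * x)[None, :] * np.ones((512, 1))
factor = 8

# Naive sampling without a PSF: the pattern aliases into spurious low-frequency stripes.
aliased = hires[::factor, ::factor]

# Applying a Gaussian PSF before sampling attenuates the high frequency,
# and the sampled image is (correctly) almost uniform.
clean = gaussian_filter(hires, sigma=factor)[::factor, ::factor]

print(aliased.std(), clean.std())   # strong spurious contrast vs. ~0
```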
In computer vision, the objective of image processing is often to extract information from the image geometry: track features, match edges, among others. In this context, one may easily forget that the capability of the algorithms to extract such information depends on notions such as texture, contrast and noise, which all belong to a broader field called radiometry.
The objective of SurRender is to simulate this radiometry in an accurate and physically correct way. This means that SurRender is able to determine the flux received by each pixel, to express this flux in physical units such as W/m² or a number of photons, and to convert these physical values into a digital image.
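The kind of conversion involved can be sketched with a simple photon-counting sensor model: pixel irradiance is turned into collected energy, then photons, electrons and finally a digital number. The sketch below is a generic illustration under assumed parameter values (wavelength, pixel pitch, quantum efficiency, gain); it is not SurRender's actual sensor model.

```python
import numpy as np

H = 6.626e-34          # Planck constant [J.s]
C = 2.998e8            # speed of light [m/s]

def irradiance_to_dn(irradiance, wavelength=550e-9, pixel_pitch=10e-6,
                     t_int=1e-3, qe=0.6, gain_e_per_dn=4.0, full_well=30000,
                     rng=None):
    """Convert a pixel irradiance [W/m^2] into a digital number with a simple
    photon-counting model. All parameter values are generic illustrations."""
    energy = irradiance * pixel_pitch**2 * t_int        # collected energy [J]
    photons = energy / (H * C / wavelength)             # mean photon count
    if rng is not None:                                  # optional shot noise
        photons = rng.poisson(photons)
    electrons = min(photons * qe, full_well)             # quantum efficiency + saturation
    return int(electrons / gain_e_per_dn)                # quantization

# Example: 0.1 W/m^2 on a 10 µm pixel during 1 ms
print(irradiance_to_dn(0.1, rng=np.random.default_rng(0)))
```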
SurRender’s capability to handle quantitative radiometry ensures that the content of the digital images is representative, in terms of signal and noise level. The behavior of the algorithms on these simulated images is then similar to what could be expected on the real image in a real mission, making SurRender fully suitable to the validation of image-processing and vision-based navigation algorithms.
The recommended way of using SurRender is through its client-server mode: SurRender’s main application runs a server, which can be located on the same computer as the client part, or on a remote computer. The server receives commands from the client through a TCP/IP link, and sends the resulting image back. If sent over a network, images can be compressed to reduce the transmission duration.
The client, on its side, is developed by the user using the various APIs provided. In particular, one can call the SurRender API from Lua, C++11, Matlab (with or without Simulink) or Python3. A faster C interface for real-time applications is also available.
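As a rough illustration of the client-server workflow, a Python client typically connects to a running server, configures the scene and camera, requests a render and retrieves the image. The module, class and method names below are placeholders chosen for illustration, not guaranteed to match the actual API; refer to the Doxygen documentation shipped with the SurRender package for the real Python interface.

```python
# Hypothetical client-side sketch: all names are placeholders for illustration.
from surrender.surrender_client import surrender_client   # assumed module layout

s = surrender_client()
s.connect("127.0.0.1", 5151)      # server address and port (assumed values)
s.setImageSize(1024, 1024)        # configure the virtual camera (assumed method)
s.render()                        # ask the server to raytrace one image
image = s.getImage()              # image sent back over the TCP/IP link
```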
Note that besides the client-server mode, the SurRender main application (or “server”) can run Lua scripts directly, without using any TCP/IP link.