
FABRY–PEROT–BASED HYPERSPECTRAL REFLECTANCE IMAGING OF ASTEROIDS

Leevi Lind

Master’s Thesis, June 2021

Department of Physics and Mathematics

University of Eastern Finland


Leevi Lind: Fabry–Perot-based hyperspectral reflectance imaging of asteroids, 70 pages

University of Eastern Finland

Filosofian maisteri, fysiikka (M.Sc., Physics)

Supervisors: Ph.D. Hannu Laamanen, University of Eastern Finland; Ph.D. Ilkka Pölönen, University of Jyväskylä

Abstract

The compositions of asteroids are interesting for the fields of astronomy, planetary defense, and asteroid mining. The primary method of investigating these compositions is reflectance spectroscopy. Reflectance maps of asteroids could be built from hyperspectral imaging measurements if proper calibration is applied. We propose a measurement calibration method based on accurately recording the absolute spectral radiance of light reflected from an asteroid and comparing it to the computationally evaluated spectral radiance reflected from an ideal Lambertian reflector in an identical lighting geometry. Laboratory measurements modeling the hyperspectral imaging of an asteroid were made with a Fabry–Perot interferometer based imager to evaluate the performance of the proposed calibration method.

The results showed differences between reflectance spectra calculated with the proposed method and with reference methods. Results of the proposed method exhibited a dip in reflectance near 570 nm, and higher reflectance throughout the measurement wavelength range, especially in the infrared region. Additional characterization measurements of the used hyperspectral imager point to errors in its calibration: these could cause the differences between test reflectances and reference reflectances. We believe these issues could be solved with proper calibration of the imager.

Keywords: spectral imaging; hyperspectral imaging; imaging; calibration; radiometry; remote sensing; asteroid; spectroscopy


Acknowledgments

This research has been carried out at the Spectral Imaging Laboratory of the University of Jyväskylä, with laboratory measurements performed at the Color Laboratory of the University of Eastern Finland. The research is funded by the Smart-HSI project of the Academy of Finland (grant number 335615).

Preface

I wish to thank each and every one involved in the process of creating this work.

Special thanks of course belong to my supervisors Hannu and Ilkka, for their many hours of guidance. Equally important have been my colleagues and friends, who for the last five months have bravely endured innumerable questions and complaints about space rocks, spectral imagers, and Python libraries.

Jyväskylä, June 3, 2021

Leevi Lind


Contents

1 Introduction
  1.1 Background
  1.2 Motivation and research questions
  1.3 Outline

2 Theoretical background
  2.1 Hyperspectral imaging
  2.2 Fabry–Perot interferometer
  2.3 Working principle of FPI-based HSIs
  2.4 Radiometry, photometry, and spectral similarity
    2.4.1 Radiometry
    2.4.2 Photometry
    2.4.3 Similarity of spectra
  2.5 Radiometric calibration
    2.5.1 Device calibration
    2.5.2 Measurement calibration

3 Measurements
  3.1 Imager
  3.2 HSI characterization
  3.3 Asteroid imaging

4 Calculations
  4.1 Pre-processing
  4.2 Characterization
  4.3 Asteroid imaging

5 Results
  5.1 Characterization
    5.1.1 Radiance errors
    5.1.2 Wavelength errors
    5.1.3 Chromatic aberration
  5.2 Asteroid imaging
    5.2.1 Rock sample
    5.2.2 Colorchecker
    5.2.3 Ceramic samples
  5.3 Summary of results

6 Discussion and conclusions

References


Chapter I

Introduction

1.1 Background

Asteroids are defined as small, natural, solid bodies orbiting the Sun. They differ from planets and dwarf planets by their size, and from comets in that they show no dust or gas cloud flowing from them. [1]

Compositions of asteroids are interesting for the planetary sciences, as they can offer information on the formation of the solar system and the effects of space weathering. Other fields where the compositional information may be of use are planetary defense [2] and asteroid mining [2,3]. For the former, knowing what an asteroid consists of could affect the strategies employed in mitigating the danger it poses to Earth. For the latter, the materials present in an asteroid should be analyzed before the start of mining operations.

The most important technique for analyzing asteroid compositions is reflectance spectroscopy. The principle behind this technique is analyzing the spectrum of sunlight reflected by an asteroid and searching for spectral properties characteristic of certain materials. For example, the crystal lattices of different minerals absorb different wavelengths of light. [1]

Because asteroids are relatively small, spectral observations of them performed with telescopes from Earth or its orbit are “disc-integrated”: they see the asteroid as a uniform disc and cannot resolve how the different spectral signatures are distributed spatially on the asteroid. [4] For a more complete characterization, missions where a spacecraft is maneuvered to fly by, orbit, or land on an asteroid are needed [1].


Notable examples of asteroid missions include NEAR–Shoemaker to 433 Eros [5], Hayabusa2 to 162173 Ryugu [6], and OSIRIS–REx to 101955 Bennu [7,8]. The objective of NEAR was to orbit its target and use remote sensing methods, including reflectance spectroscopy, for mapping the surface properties. While the main objectives of both Hayabusa2 and OSIRIS–REx were collecting a sample from an asteroid and returning it to Earth, both also utilized spectroscopic instruments to map the surfaces of their targets.

1.2 Motivation and research questions

In recent years there has been much interest in the CubeSat concept [9], which offers a standardized and affordable platform for small satellites. Plans have been made to also apply CubeSats to asteroid exploration [10]. The European Space Agency’s (ESA) Hera mission to the twin asteroid 65803 Didymos is set to be launched in 2024 and to arrive at its target in 2027 [11]. Hera will carry with it two CubeSats, Juventas [12] and Milani [13], to be deployed near Didymos.

Milani’s primary scientific payload is the ASPECT hyperspectral imager module, consisting of three instruments: a point spectrometer for short-wave-infrared (SWIR) wavelengths and spectral imagers for near-infrared (NIR) and visual wavelengths. All three instruments are based on the same technology, a hyperspectral imager (HSI) that uses a Fabry–Perot interferometer (FPI) as an adjustable filter. [10,13] This design is attractive for nanosatellite applications due to its small size. Its space-worthiness has already been demonstrated on board Aalto-1, an experimental Finnish CubeSat used in Earth observation (EO) [14].

One of Milani’s tasks is to create a reflectance map of Didymos using the three instruments of ASPECT. Imaging in deep space from a semi-autonomous satellite is fundamentally different from imaging on Earth or imaging the Earth from its orbit. While the mission shares similarities with previous asteroid missions, both the instruments and their platform are different. As such, the same methodology may not be applicable. These considerations lead to the research questions we aim to answer in this study:

• Q1: How to calculate a reflectance map from FPI–HSI measurements of an asteroid?

• Q2: How good is the performance of this reflectance calculation method?


1.3 Outline

In Chap. 2 we present the theoretical background of this work, including an overview of hyperspectral imaging, the working principles of both the Fabry–Perot interferometer and hyperspectral imagers based on it, the used radiometric and photometric quantities, and the principles of radiometric calibration in reflectance imaging.

Chapter 3 introduces the performed measurements. This chapter consists of three separate parts, namely descriptions of the used hyperspectral imager, imager characterization measurements, and measurements for modeling asteroid imaging.

Chapter 4 gives details on the calculations performed with the gathered data. Its sections include a description of the pre-processing steps for creating hyperspectral datasets from raw imager data, and separate sections for calculations with the characterization data and the asteroid imaging modeling data.

The subject of Chap. 5 is the results obtained from the calculations. Again the imager characterization and asteroid imaging modeling results are divided into their own sections, with an additional section summarizing the results.

Finally, Chap. 6 will conclude the work by providing discussion of the obtained results and a look into the further research planned in connection with this work.


Chapter II

Theoretical background

This chapter offers a short introduction to the theoretical background of the main topics of this work. The concepts and equations presented here will be utilized in the coming chapters for describing the performed measurements and calculations and their results. First, Sect. 2.1 gives an overview of the principles and techniques of spectral imaging. Section 2.2 focuses on the Fabry–Perot interferometer, a device used for spectral separation in the type of hyperspectral imager this work is concerned with. Section 2.3 describes the principles of FPI-based hyperspectral imaging. Section 2.4 goes over the radiometric and photometric quantities and the indices of spectral similarity used in this study. The last section, Sect. 2.5, first presents the general principles of laboratory calibration of hyperspectral imagers and then relates the measurement calibration methods applied to each hyperspectral image.

2.1 Hyperspectral imaging

In the most basic form of optical imaging, dubbed grayscale or monochrome imaging, all wavelengths of light are recorded into one channel. The result is a map of the spatial dependence of the amount of light reaching the image sensor, weighted by the sensor’s spectral sensitivity. While imaging has its roots in recording light distributions with photosensitive chemicals, since the 1980s optoelectronic sensors have increasingly displaced film as the detector. [15] As such, in this work we will only consider cameras with electronic sensors.

A common way to include color information in an image is with the use of colored transmission filters. In modern commercial color cameras, this is achieved through superimposing a colored filter array on the camera sensor, only allowing certain colors of incoming light to reach certain pixels. A typical filter array mimics the human eye by having three types of filters, ones that allow through red, green, and blue light, respectively. Imaging with a filter array of this sort creates an RGB image with three separate channels that have their own spectral sensitivities – essentially a stack of three grayscale images. RGB imaging may be considered a form of rudimentary spectral imaging, as it records some spectral properties of the imaging target. [15]

One common filter array configuration is known as the Bayer filter, introduced in 1976 by B. E. Bayer [16]. The filter, presented in Fig. 2.1, has twice as many green elements as red or blue ones. This design was motivated by the spectral responsivity curve of the human eye, which peaks at the green wavelengths near 550 nm and is lower for both red and blue. Green light thus carries most of the luminous information. [15] Other essentially identical Bayer patterns can be produced by rotating the pattern by 90 degrees: the green pixels will stay on the diagonal, and the red and blue in opposite corners.
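To make the pattern concrete, here is a minimal Python sketch (NumPy only) that builds one rotation of the Bayer mosaic for an illustrative sensor size; the tile orientation shown is one of the equivalent variants mentioned above, chosen arbitrarily.

```python
import numpy as np

# One 2x2 Bayer tile; other equivalent patterns are 90-degree rotations.
tile = np.array([["R", "G"],
                 ["G", "B"]])

# Repeat the tile over an illustrative 4x6-pixel sensor.
mask = np.tile(tile, (4 // 2, 6 // 2))
print(mask)
# Half of the pixels are green, a quarter red, and a quarter blue,
# reflecting the eye's peak sensitivity near 550 nm.
```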

Color imaging with three filters is often sufficient for visual applications, since the human eye also has three types of color-sensitive cells. Imaging with more than three wavelength channels is generally known as multiband or multispectral imaging. Unlike RGB imaging, multispectral imaging is often not concerned with recording a target as a human would see it. The wavelengths where the recorded channels reside are not restricted to the wavelengths of visible light: often infrared (IR) or ultraviolet (UV) can offer information not to be found in the visible region. [17]

Figure 2.1: One configuration of the Bayer filter.

The term hyperspectral imaging is used to describe imaging techniques where images are captured with tens to hundreds of different wavelengths. The main distinction between hyperspectral and multispectral imaging is the approximately continuous spectrum captured for each spatial pixel, instead of only some bands separated by varying amounts. A hyperspectral image then forms a cube with two spatial dimensions and one spectral dimension: (x, y, λ). [17,18] Figure 2.2 shows an illustration of a hyperspectral datacube.
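As a concrete illustration of the (x, y, λ) structure, the following Python sketch indexes a hyperspectral cube stored as a NumPy array; the dimensions are illustrative, not those of any imager in this work.

```python
import numpy as np

# An empty hyperspectral datacube: two spatial axes and one spectral axis.
cube = np.zeros((64, 64, 133))   # (x, y, band), illustrative sizes

spectrum = cube[10, 20, :]       # spectrum of one spatial pixel, shape (133,)
band_image = cube[:, :, 42]      # grayscale image of one channel, shape (64, 64)
```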

Hyperspectral images can be captured with several technologies. One of the simplest methods involves using a device akin to a point spectrometer and moving its field of view over the imaging target. Data from the sensor is recorded and constructed into a spatially resolved hyperspectral datacube based on the sensor movement speed. This technique is known as “whiskbroom” scanning. [18] A notable example of a whiskbroom sensor is the aircraft-based AVIRIS, first flown in 1987 [19] and still in use in 2021 [20].

Another technique, one less prone to errors, is so-called “pushbroom” scanning. Such scanners could be thought of as a line of point spectrometers, as they can produce a spectral image of a line from the imaging target with one exposure. This line can then be swept over the target to extend the image into another spatial dimension. Compared to whiskbroom scanning, where the imager must visit each spatial pixel individually, pushbroom scanning allows a longer “dwell time” for each pixel by recording a line of them with one exposure. The exposure time for a pixel can then be increased while keeping the total exposure time needed to capture a datacube the same. [18] Pushbroom sensors are common in Earth observation applications, with examples including the satellite-based sensors Hyperion [21] and HyperScout [22].

“Frame-based” or “staring” imagers do not require scanning the imager field of view over the target in either direction but instead capture intact frames. This is especially useful when imaging targets that are stationary relative to the imager, as no scanning equipment is needed. [18] FPI-HSIs, such as the sensor used in the experimental part of this work and the one on board Aalto-1 [14], are frame-based devices.

Figure 2.2: Illustration of hyperspectral data. On the left is shown a stack of grayscale images from different wavelength channels, and on the right a continuous spectrum from one spatial pixel. Image courtesy of Ilkka Pölönen.

Central to the hyperspectral imager design is the method of wavelength selection or wavelength separation: the light consisting of many different wavelengths must be separated into its components to analyze its spectral distribution. The utilized method will set constraints on the use of the sensor. If a prism or a diffraction grating is used for separating the wavelengths, the sensor must be used in either whiskbroom or pushbroom mode. A prism or a grating will spatially separate the different wavelengths on different parts of the sensor array. A line detector is sufficient for whiskbroom operation, while for pushbroom sensors 2D sensor arrays are needed. [18]

For frame-based sensors, the wavelength separation is performed over time while detecting intact frames from a 2D sensor array. Methods of controlling the wavelength of light reaching the sensor in frame imagers include filter wheels, tunable filters, and interferometers. [18] The tunable filters used in spectral imaging include, for example, the Liquid Crystal Tunable Filter (LCTF), which relies on polarizers and electrically tunable birefringent materials, the Acousto-Optical Tunable Filter (AOTF), which creates a tunable diffraction grating by modulating acoustic waves, and the Fabry–Perot interferometer [23]. As the imager considered in this work relies on the FPI as its wavelength selection element, the operating principles of the FPI are considered in more detail in the next section.

2.2 Fabry–Perot interferometer

The Fabry–Perot interferometer, or etalon, was introduced initially in 1899 by C. Fabry and A. Perot. The device they proposed consisted of two silvered glass plates placed parallel to each other with an air gap between them. When light enters the cavity through one of the plates, it will reflect multiple times in the cavity. Each time the light is reflected, a part of it is transmitted – the relative amounts of reflected and transmitted light are related to the reflectance R and the transmittance T of the silvered surfaces. The total transmittance of the device is affected by the light interfering with itself. If the phases of two transmitted beams are the same, the interference is constructive, and the transmitted light is amplified. If the phases of the transmitted beams are opposite, destructive interference will reduce the transmittance to approximately zero when the two waves have the same amplitude. [24] A schematic view of the FPI is shown in Fig. 2.3.

The phase difference between two transmitted waves of the FPI cavity is determined by the optical path length difference (OPD) produced in the cavity. This difference is related to the wavelength of the light λ, the incidence angle θ, the refractive index n of the cavity, and the gap length d (mirror separation). The waves are in the same phase when their optical path difference is an integer multiple of the wavelength, i.e. OPD = mλ, where m is either zero or a positive integer. The phases of the two waves are opposite when the condition OPD = mλ + λ/2 is fulfilled. The optical path difference can be calculated as

OPD = 2dn cos θ, (2.1)

which for a cavity in vacuum (n = 1) simplifies to

OPD = 2d cos θ. (2.2)

This also approximately holds for an air-filled cavity.

The number of interfering waves in an FPI is not two, but instead can be thought of as infinite when the reflectances of the mirror surfaces are high. If the condition OPD = mλ is fulfilled, the waves will all interfere constructively. However, if the OPD differs from this value, among the multitude of waves there exist some that have the opposite phase, leading to destructive interference. Thus, light from a monochromatic point source will be transmitted from an FPI as a series of bright and dark concentric rings due to the varying OPD caused by the uniform angular distribution. [24]

Figure 2.3: Schematic view of the FPI in its original configuration. The two surfaces have reflectance R and transmittance T and are separated by the gap length d; θ denotes the incidence angle.


The original scheme of Ref. [24] also included a contraption for displacing one of the plates by a few microns, allowing minute adjustments of the gap length d. The paper suggested two uses for this adjustable-gap FPI: spectroscopic measurements and accurate measurements of small distances. Both involved observing the transmission fringes of monochromatic light while varying the gap length d. The FPI-HSIs considered in this work also apply an adjustable-gap FPI in spectral measurements, though in a slightly different fashion.

2.3 Working principle of FPI–based HSIs

While an FPI can be used for spectral imaging purposes as an interferometer by applying the principles of Fourier transform spectroscopy [25], in the spectral imagers related to this work an FPI is utilized as a tunable interference filter.

When collimated light is incident on the FPI, the incidence angle θ is zero. This simplifies the expression for the optical path difference presented in Eq. (2.2) to

OPD = 2d, (2.3)

and consequently the condition for constructive interference to

λ = 2d/m. (2.4)

When white light comes to an ideal FPI, only the wavelengths fulfilling the condition of constructive interference will be transmitted. The transmission spectrum of a real-world FPI consists of sharp spikes at these wavelengths and low intensities between them. Wavelengths of the transmission peaks can be controlled by modulating the gap length d, according to Eq. (2.4).

With accurate control of the gap length, the transmission peaks have known wavelengths. This allows using a variable-gap FPI for spectral measurements through observing the transmission intensity for a series of gap values. Typically an additional bandpass filter, dubbed an order sorting filter, is included to reduce the number of passed FPI orders to just one. [26] The passband of this filter is determined by the Free Spectral Range (FSR), the wavelength separation between two adjacent transmission orders [27,28].
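As an illustration of Eqs. (2.3) and (2.4), the Python sketch below lists the transmission peaks of an ideal FPI at normal incidence that fall inside a given passband; the 1.5 µm gap is an arbitrary example, and the wavelength limits match the 456–840 nm range quoted later in Sect. 3.1.

```python
import numpy as np

def fpi_peaks(gap_nm, wl_min=456.0, wl_max=840.0):
    """Peak wavelengths of an ideal FPI at normal incidence (Eq. 2.4)."""
    opd = 2.0 * gap_nm                     # Eq. (2.3): OPD = 2d when theta = 0
    m_min = int(np.ceil(opd / wl_max))     # longest wavelength, smallest order
    m_max = int(np.floor(opd / wl_min))    # shortest wavelength, largest order
    orders = np.arange(m_min, m_max + 1)
    return opd / orders                    # lambda = 2d / m for each order m

peaks = fpi_peaks(gap_nm=1500.0)
print(peaks)                   # [750. 600. 500.] -> three peaks in the passband
print(peaks[:-1] - peaks[1:])  # order-to-order separations, i.e. the local FSR
```

Note how several orders can fall inside the passband at once; Sect. 2.3 describes how the VTT imagers exploit an RGB sensor to separate three such peaks.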

Possibly one of the first uses of an FPI in spectral imaging was by Marinelli et al. in 1999 [28]. The device presented in this paper was an imager for the Long-Wave InfraRed (LWIR), consisting of a scanning system, mirrors for focusing and collimating light, a Piezo-actuated variable-gap FPI for spectral scanning, and a bandpass filter corresponding to the FSR of the FPI for order sorting.

A more recent development in FPI-HSIs is the series of prototype imagers built by the Technical Research Centre of Finland (VTT) [26,29–32], including the instrument used in this study. The main operating principle of these devices is essentially the same as that of Marinelli’s design: incoming light is collimated to the FPI, which passes a narrow peak at a wavelength depending on the air gap length. The number of passed peaks is controlled using an order sorting filter. Light transmitted by the filter configuration is focused onto a camera sensor for recording. When capturing a hyperspectral image, the FPI air gap is varied through a series of known values, and the sensor readings are recorded for each gap length. Knowing the gap lengths for each set of sensor readings, the image created by them can be attributed to a specific wavelength. A conceptual view of the FPI-HSI can be seen in Fig. 2.4 [26].

Differences between the design of Marinelli et al. and those created by VTT come from the used components and the resulting specifications. VTT has targeted the Visual and Near InfraRed (VNIR) wavelengths, and the elements for collimating and focusing light are lenses instead of mirrors. The FSR, defined as the separation between two adjacent orders, is also shorter at these shorter wavelengths. To combat this, VTT has opted to use RGB sensor arrays. With three types of pixels, the order sorting filter can be selected so that the FPI will transmit three peaks. Of the three, the peak with the longest wavelength will be detected by the red pixels, the next longest by the green, and the shortest by the blue.

Figure 2.4: Conceptual schematic of the FPI-HSI. The numbered elements are 1: imaging target, 2: collimation optics, 3: FPI, 4: order sorting filter, 5: focusing optics, 6: camera sensor. [26]

Some of VTT’s FPI-HSIs use Piezo-actuated FPIs [26,29,32], while in others this component is based on Micro-Electro-Mechanical Systems (MEMS) technology [30,31].

2.4 Radiometry, photometry, and spectral similarity

Radiometry refers to the study of radiation, the propagation of energy through space. While this can include various forms of energy, including nuclear radiation or other energetic particles, in this work we focus solely on electromagnetic radiation. Radiometry is discussed in Sect. 2.4.1. Photometry deals with the same concepts as radiometry but adapted for our most important detector, the human eye. This system is considered in Sect. 2.4.2. For both radiometry and photometry, we, for the most part, follow the treatment of McCluney in Ref. [33].

As this work focuses on spectral imaging, for all radiometric and photometric quantities we are interested in their spectral distributions: how the radiation is distributed among its wavelengths. Though electromagnetic radiation includes everything from gamma rays to radio waves, we limit our examination to the wavelengths of visible light between 380 nm and 780 nm and the immediate surroundings of this range.

To assess the performance of the proposed reflectance calculation method, two mathematical measures were applied: the Spectral Angle Map (SAM) [34] and the Mean Absolute Error (MAE) [35]. Both are described in Sect. 2.4.3.

2.4.1 Radiometry

The most fundamental radiometric quantity we are concerned with is radiant flux, denoted by Φ and expressed in watts. This measure gives the energy transferred through a surface or a volume in unit time. The spectral radiant flux, distinguished by the subscript λ as Φλ, is defined by dividing the radiant flux by a unit wavelength interval:

Φλ = dΦ/dλ. (2.5)

This quantity gives the radiant flux falling within an infinitesimal wavelength interval dλ at the wavelength λ. The unit of spectral radiant flux is W/nm. [33]

Irradiance, denoted by E, describes the radiant flux through a unit area. The unit of irradiance is W/m², and it can be described mathematically as

E = dΦ/dA, (2.6)

where dA is an element of area. The corresponding spectral quantity, spectral irradiance, is given by

Eλ = dE/dλ = d²Φ/(dA dλ). (2.7)

The unit of spectral irradiance is then W/(m² nm). When constraining the irradiance of an object, it is important to also specify the spatial point where the irradiance is considered. [33] This is especially true in imaging, where the spatial distributions of light in the scene are the object of interest.

Radiant intensity, I, describes the radiant flux per unit solid angle from or to a point in space in a specific direction. Radiant intensity has the unit W/sr, with sr denoting the steradian, the basic unit of solid angle. The equation defining radiant intensity is

I = dΦ/dω, (2.8)

where dω is a differential element of the solid angle in the direction of the flux. Spectral radiant intensity is defined similarly to spectral irradiance, as

Iλ = dI/dλ = d²Φ/(dω dλ). (2.9)

While for irradiance the location of the point of interest was necessary, for radiant intensity both location and direction must be specified. [33]

Radiance, L, is a measure of the radiant flux density per unit area and unit solid angle. It is given the unit W/(m² sr). Radiance can be described mathematically as

L = d²Φ/(dω dAproj) = d²Φ/(dω dA cos θ), (2.10)

where dAproj = dA cos θ is the projected area on a surface, dependent on the angle θ between the direction of the flux and the surface normal. The relation between radiance and spectral radiance is similar to that between irradiance and spectral irradiance, or radiant intensity and spectral radiant intensity:

Lλ = dL/dλ = d³Φ/(dω dAproj dλ). (2.11)

The unit of spectral radiance is then W/(m² sr nm). [33]

For the purposes of imaging, radiance has an interesting quality in its invariance: if losses of scattering and absorption can be ignored, radiance remains the same when propagating through a medium. [33] The distance between an imager and its target will then make no difference in the recorded radiance when the medium between the imager and its target is the vacuum of space. Further, radiance reflected from the target can be recorded accurately if the losses of reflection, absorption, and scattering in the imager are properly characterized.

The quantity of reflectance is a material property that describes the ratio of reflected and incident light for a surface. For certain surfaces, reflectance can be strongly dependent on geometry, with the direction of incident light affecting the angular distribution of reflected light. In diffuse reflection the reflected light goes equally in all directions, while in specular reflection the reflected light propagates in only one direction, determined by the incidence angle. For real-world objects, the reflection is typically a mix of both diffuse and specular. A complete characterization of reflectance requires determining the Bidirectional Reflectance Distribution Function (BRDF), which takes these geometric considerations into account. [36,37]

In this work we will consider a quantity more readily measured than the BRDF: the Bidirectional Reflectance Factor (BRF). This factor, denoted by R, is defined by comparing light reflected from a sample to that reflected from an ideal surface. It can be calculated based on flux or radiance, as

R = dΦr/dΦid = dLr/dLid, (2.12)

where Φr and Lr denote the flux and radiance reflected from the sample, and Φid and Lid denote those reflected from an ideal surface, respectively. [37]

In this case, the ideal reflector to which the sample is compared is completely reflective and diffuse. The requirement of reflectiveness states that the surface reflects all light incident on it, at all wavelengths. A perfectly diffuse reflector reflects light uniformly in all directions. Such a surface obeys Lambert’s cosine law, which states that the radiance reflected from the surface is proportional only to the cosine of the incidence angle, measured with respect to the surface normal. Radiance reflected from an ideal Lambertian surface can be expressed as

Lid = Ei/π, (2.13)

where Ei is the incident irradiance. [37] If the incidence angle ϕ, measured with respect to the surface normal, is not zero, this affects the projected area on the surface, modifying the incident irradiance:

Lid = Ei cos ϕ/π. (2.14)

Another concept directly related to reflectance is that of albedo. In astronomy, this term is connected to two quantities, the Bond albedo and the geometric albedo. Bond albedo measures the total portion of light reflected by a body, integrated over wavelengths. Geometric albedo is a spectral quantity that compares the disc-integrated light reflected from a body to that reflected from an ideal reflector disc of the same size as said body, in a measurement where light comes from directly behind the observer. [1]

While we have discussed these quantities using differentials, the measurement techniques used for determining them always return discrete values. For example, the spectral channels of an HSI are always finite in number and width.

2.4.2 Photometry

Photometry is the science of human vision. It is a branch of radiometry, where the previously presented basic quantities of radiant flux, irradiance, radiant intensity, and radiance are adapted for the sensitivity of the human eye. In contrast with radiometry, photometry is then concerned with only a small range of the spectrum of electromagnetic radiation: visible light at approximately 380 nm to 780 nm. The range where electromagnetic radiation is “visible” varies between observers, meaning this range is not absolute. [33]

It is to be noted that the word “photometry” is also used to refer to astronomical observations made with light [1]. While this work also considers astronomy, here photometry will refer to the system described in the previous paragraph.

The conversion between radiometric and photometric quantities is performed by weighting the radiometric quantities with a sensitivity curve of the human eye. The eye has different color-sensing cells for brighter and darker illumination conditions, and different sensitivity curves are used depending on which of the cells are active. [33] The relative sensitivity of the human eye, also called luminous efficiency, is presented in Fig. 2.5 for “photopic” vision, active at high levels of light.

The presented sensitivity curve is often called V(λ). To translate any of the four presented radiometric quantities into its photometric correspondent, the following equation can be applied:

Xv = 683 lm/W × ∫_{380 nm}^{780 nm} Xλ V(λ) dλ. (2.15)

Here Xv denotes the photometric quantity (subscript v for “visible”), and Xλ denotes the corresponding spectral radiometric quantity. The constant 683 lm/W is the luminous efficacy, a factor used for conversion between watts and lumens. The four photometric quantities corresponding to the four presented radiometric quantities are presented in Tab. 2.1, together with their respective units. [33]
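A discretized form of Eq. (2.15) is straightforward to evaluate on sampled spectra. The sketch below assumes illustrative input arrays: wavelengths wl in nm, a spectral quantity X_spectral sampled at those wavelengths, and the luminous efficiency V on the same grid.

```python
import numpy as np

def to_photometric(wl, X_spectral, V):
    """Approximate Eq. (2.15) with the trapezoidal rule over 380-780 nm."""
    mask = (wl >= 380.0) & (wl <= 780.0)
    x, y = wl[mask], X_spectral[mask] * V[mask]
    integral = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))
    return 683.0 * integral   # luminous efficacy, lm/W

# E.g. spectral irradiance in W/(m^2 nm) yields illuminance in lux.
```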

2.4.3 Similarity of spectra

Two measures of spectral similarity were used when comparing the results of the tested reflectance calculations to those performed with reference methods.

Figure 2.5: The spectral luminous efficiency of human photopic vision, V(λ).

Table 2.1: Radiometric and photometric quantities, their symbols, and their units. [33]

Radiometric                            | Photometric
Quantity           Symbol  Unit        | Quantity            Symbol  Unit
Radiant flux       Φ       watt (W)    | Luminous flux       Φv      lumen (lm)
Irradiance         E       W/m²        | Illuminance         Ev      lm/m² = lux (lx)
Radiant intensity  I       W/sr        | Luminous intensity  Iv      lm/sr = candela (cd)
Radiance           L       W/(m² sr)   | Luminance           Lv      lm/(m² sr) = cd/m²

First of these is the Spectral Angle Map (SAM), defined in 1993 by Kruse et al. in Ref. [34] for the analysis of spectral images. SAM measures the angle between two spectra by considering them as vectors: a spectrum measured with N wavelength channels can be thought of as a vector in N-dimensional space. A mathematical expression for the spectral angle α is given by

α(t, r) = cos⁻¹( t · r / (|t| |r|) ), (2.16)

where t denotes a vector for a test spectrum, and r denotes a vector for a reference spectrum. The same can also be written in a form more fitting for computer implementation by expressing the inner product and the two vector norms through sums, as

α(t, r) = cos⁻¹( Σᵢ tᵢrᵢ / [ (Σᵢ tᵢ²)^{1/2} (Σᵢ rᵢ²)^{1/2} ] ), (2.17)

with N denoting the number of wavelength bands and each sum running from i = 1 to N. An illustration of the spectral angle in two dimensions is presented in Fig. 2.6.

A similar metric for spectral differences was presented in 1997 by Romero et al. in Ref. [38] under the name Goodness-of-Fit Coefficient (GFC). The major difference between SAM and GFC is that GFC does not calculate the angle between two vectors, but the cosine of said angle. The inverse cosine present in Eq. (2.16) and Eq. (2.17) is then left out.

Both SAM and GFC ignore the differences in intensity, only measuring the degree of similarity in the shape of two spectra. While this can be a desired property in many cases, in this work these intensity differences are also of interest. Thus, analysis using SAM was complemented with another measure of similarity, the Mean Absolute Error (MAE) [35]. Using the previous notation, the MAE between two spectra t and r is calculated as

MAE = (1/N) Σᵢ |tᵢ − rᵢ|. (2.18)

Figure 2.6: Illustration of the spectral angle in two dimensions. [34]
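For reference, here is a minimal Python implementation of both measures; the two spectra below are illustrative and chosen to have the same shape but different intensities.

```python
import numpy as np

def sam(t, r):
    """Spectral angle of Eq. (2.17), in radians."""
    cos_a = np.dot(t, r) / (np.linalg.norm(t) * np.linalg.norm(r))
    return np.arccos(np.clip(cos_a, -1.0, 1.0))   # clip guards against rounding

def mae(t, r):
    """Mean absolute error of Eq. (2.18)."""
    return np.mean(np.abs(t - r))

r = np.array([0.30, 0.32, 0.35, 0.40])   # reference spectrum
t = 1.2 * r                              # same shape, 20 % brighter
print(sam(t, r))   # ~0: SAM ignores the intensity difference
print(mae(t, r))   # 0.0685: MAE captures it
```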

2.5 Radiometric calibration

The radiometric calibration of an HSI is, for the purposes of this work, divided into two parts: device calibration and measurement calibration. With device calibration, we refer to laboratory measurements and the utilization of their data for calibrating the HSI to record spectral radiances. Measurement calibration refers to the calculations applied to each measurement in producing reflectance data. Device calibration is considered in Sect. 2.5.1 and measurement calibration in Sect. 2.5.2.

2.5.1 Device calibration

The goal of HSI calibration is to produce a mathematical model that can be used to transform individual pixel readings into spectral radiances at the imager’s aperture (at-aperture radiance). The basis for this model comes from characterization measurements, of which there are several types. [37]

Spectral characterization is performed by recording a series of narrow and known spectra, typically produced by a combination of a broad-band light source and a monochromator. By knowing the wavelengths of the measured peaks, the spectral responses of the HSI can be adjusted to match them. [37] If the radiances of each peak are known, these measurements can also be used for radiance calibration. This is the approach taken in the calibration of FPI-HSIs, for example, in Ref. [26] and Ref. [29].

More typically, radiance calibration is performed using a broad–band light source of known radiance and spectral shape. The output of this source can be measured with the HSI, for example reflected from a reflectance standard or an integrating sphere. The HSI response can then be matched with the known spectral radiance of the light source. Alternatively, the light source may be measured with another, calibrated spectroradiometer, and the HSI response can be matched to the response of this detector. [37]
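A minimal sketch of the matching step, assuming a linear sensor response; the numbers are invented for illustration and do not come from any measurement in this work.

```python
import numpy as np

# Dark-corrected HSI outputs for the calibration source, per channel...
dn = np.array([812.0, 954.0, 1103.0])
# ...and radiances of the same source from a calibrated reference detector.
L_ref = np.array([0.021, 0.025, 0.028])   # W/(m^2 sr nm)

coeff = L_ref / dn                        # one coefficient per channel

# A later measurement is then converted to at-aperture radiance:
L = coeff * np.array([640.0, 700.0, 910.0])
```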

In-flight calibration methods are also applied in some satellite-based hyperspectral sensors. This can mean equipping the spacecraft with a blackbody radiator or a solar diffuser, which may be used to periodically check the performance of the HSI. [37] Examples of these include filament sources on board OSIRIS–REx [7] and Hyperion [21], with Hyperion also possessing a diffuse reflector for observing solar radiation.

In-flight calibration can also be performed “vicariously”, by observing sites of known reflectance properties [37]. Turning again to the previous examples, both OSIRIS–REx [8] and Hyperion made lunar observations for calibration purposes, with Hyperion also imaging previously characterized sites on Earth.

2.5.2 Measurement calibration

The current produced by many detectors, including the silicon-based Charge-Coupled Device (CCD) and Complementary Metal Oxide Semiconductor (CMOS) technologies, is not zero when no light is incident on them. This dark output or dark current is caused by thermal motion of the electrons, which in an ideal detector would only move when they are excited by incident light. Caused by thermal excitation, the number of dark current electrons is proportional to the sensor’s temperature. [33]

The offset in data caused by this dark current can be mitigated by performing a measurement with no light arriving at the sensor and subtracting the results of that measurement from the actual measurement of a sample. This procedure is typically referred to as dark correction or black–level correction. [39]

In laboratory conditions, the dark current can be measured by simply placing a lens cap on the objective lens of the HSI and recording the sensor readings. For satellites, this method would require equipping the sensor with a mechanical shutter. The increase in mass would not be substantial, but still nonzero: especially for nanosatellites weighing a few kilograms and relying on their own propulsion, any increase in mass means more fuel spent on acceleration and, consequently, a shorter lifespan for the mission.

Another consideration comes from introducing an additional moving part, one whose function is to occlude the camera sensor. In case of a malfunction, this shutter could become fixed in the closed position, rendering the whole HSI sensor useless. Thus, it may be in the best interest of a satellite HSI designer to omit the shutter and rely on other methods for dark correction. For OSIRIS–REx, the dark current was measured by aiming the spectral instrument at a dark area of the sky and recording the sensor readings [7]. A similar approach could be implemented for FPI-HSIs by averaging the readings from dark areas of sky around a target asteroid and using the result as the dark current value. If this method of averaging is used, the dark current of each pixel cannot be subtracted from that same pixel. For accurate results, the sensor would then have to be rigorously characterized for variation between the responses of different pixels.
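The two dark correction variants discussed above can be sketched in a few lines of Python; the array names and the clipping at zero are illustrative choices, not a prescribed procedure.

```python
import numpy as np

def dark_correct(raw, dark):
    """Per-pixel dark correction: a dark frame taken with the same
    exposure time and gain is subtracted from the raw frame."""
    return np.clip(raw.astype(np.int32) - dark.astype(np.int32), 0, None)

def dark_correct_sky(raw, sky_pixels):
    """Shutterless variant: subtract the mean of dark-sky pixels,
    a single value for the whole sensor."""
    return np.clip(raw - sky_pixels.mean(), 0.0, None)
```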

Another calibration done for each measurement is the reflectance calculation, often referred to as white correction. Both Nicodemus [36] and Manolakis [37] recommend performing reflectance measurements by comparing the light reflected from a target to light reflected from an (approximately) ideal reflectance standard. In practice, this is not a feasible approach for many remote sensing applications, and especially not for asteroid imaging: it would require bringing a large sample of known reflectance with the spacecraft and placing it on the asteroid.

Instead, we propose to apply a white correction method similar to the one applied to OSIRIS–REx [7] data. While bringing a sufficiently large reference sample into space is not practical, the theoretical radiance reflected from such a sample can be calculated. By comparing the spectral radiance reflected from the asteroid to the spectral radiance reflected from this imaginary white sample, the asteroid’s reflectance spectrum can be evaluated.

Evaluating the radiance reflected from this imaginary white sample can be done based on two factors: the spectral irradiance of the Sun and the geometry of the measurement. Our Sun is a fairly thoroughly characterized light source, with its spectral irradiance measured also outside of Earth’s atmosphere [40]. While this data has been measured near Earth, at a distance from the Sun (heliocentric distance) corresponding to one Astronomical Unit (AU), it can easily be scaled to other distances using the inverse square law. Denoting the heliocentric distance in astronomical units by ds, and the spectral irradiance at the heliocentric distance of 1 AU by Eλs, the spectral irradiance in a measurement, Eλi, is given by

Eλi = Eλs/ds². (2.19)

The geometry of the measurement and the white correction calculation is fairly simple. If the heliocentric distance is 1 AU, the solar irradiance can be approximated as collimated light with reasonable accuracy. The incidence angle of light on the imaginary sample is given by the phase angle, a quantity often used in describing astronomical measurements. The phase angle, denoted by ϕ, is the angle Sun – object – observer, and as such, it is equal to the incidence angle. An illustration of the measurement geometry is presented in Fig. 2.7.

Using Eq. (2.14) we can now express the spectral radiance Lλid reflected from our imaginary white sample in terms of the incident solar spectral irradiance Eλi calculated with Eq. (2.19), and the phase angle:

Lλid = Eλi cos ϕ/π. (2.20)

By comparing the measured spectral radiance reflected from the sample to this ideal spectral radiance according to Eq. (2.12), we can find the bidirectional reflectance factor for each wavelength channel of the used imager, at each area corresponding to a spatial pixel of the image.
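Combining Eqs. (2.19), (2.20), and (2.12) gives the per-pixel, per-channel reflectance factor. The sketch below assumes illustrative inputs: a radiance cube L_measured with the band as its last axis, and the solar spectral irradiance at 1 AU sampled at the imager's band centers.

```python
import numpy as np

def reflectance_factor(L_measured, E_sun_1au, d_s_au, phase_angle_rad):
    """BRF of Eq. (2.12) against an imaginary planar Lambertian reference."""
    E_incident = E_sun_1au / d_s_au**2                       # Eq. (2.19)
    L_ideal = E_incident * np.cos(phase_angle_rad) / np.pi   # Eq. (2.20)
    return L_measured / L_ideal      # broadcasts over the spatial axes

# e.g. R = reflectance_factor(cube, E_sun_1au, d_s_au=1.5,
#                             phase_angle_rad=np.radians(30.0))
```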

Figure 2.7: Measurement geometry for asteroid imaging. Light travels from the Sun over the heliocentric distance ds to the asteroid and the imaginary white sample, and is reflected toward the HSI at the phase angle ϕ.

In this study, we will use the described planar model. However, comparing the radiances reflected from an asteroid to those reflected from a planar disc will introduce errors in the obtained reflectance data. For accurate results, the imaginary white sample should have the same geometry as the asteroid to be imaged. This could be achieved by first creating a surface map of the asteroid, for example from LIght Detection And Ranging (LIDAR) measurements. This geometry could then be overlaid on a spectral image of the asteroid, and the radiance reflected from an ideal surface of that geometry could be calculated, providing each spatial pixel of the spectral image with a reference spectrum.

While this surface geometry based white correction seems attractive, it could have several disadvantages of its own. Overlaying the surface model on the spectral image might not be a simple task for an asteroid that may or may not have prominent surface features to use as reference points. Performing white correction in this manner is also likely to be computationally costly: for each pixel, the ideally reflected spectral radiance must be calculated based on the surface normal of that point on the asteroid. The required computational power may render the method unsuitable for calculations performed with the onboard computer of a nanosatellite.


Chapter III

Measurements

The subject of this chapter is the experimental laboratory work related to the thesis. The chapter begins with a description of the used imager, given in Sect. 3.1. The two main sections of the work were the characterization of the imager and the modeling of asteroid imaging. Measurements for the former are described in Sect. 3.2, and for the latter in Sect. 3.3.

3.1 Imager

The hyperspectral imager used in this work was an FPI-based frame imager built by VTT. The imager is part of a line of prototype cameras that have been used in a variety of applications, including UAV-based remote sensing [29,41], skin cancer detection [42] and satellite-based Earth observation [14].

The device’s objective lens was attached by a standard C-mount thread, allowing the user to exchange objectives according to the imaging conditions. The imager could also be mounted on a microscope for imaging smaller samples. However, for all measurements described in this work, the objective was a Navitar machine vision lens with a 35 mm focal length, model NMV-35M1 [43]. The aperture of the objective was set at its maximum value of F#1.4. The reason for this was the limited amount of light present in the characterization measurements. The same aperture setting was used in other measurements where light was abundant, since the characterization results were to be used in analyzing the errors of those measurements.

The wavelength range, selected by an exchangeable band-pass filter used for order sorting, was set to span from 456 nm to 840 nm. The wavelength-separating component of the imager was a Piezo-actuated Fabry–Perot interferometer, which was used as an adjustable filter [26,27,29]. The imager also included additional lens systems to collimate light arriving at the FPI and to focus light onto the camera sensor. For collimation, the employed lens was a Schneider 50 F-mount lens [44], and for focusing a Schneider Compact VIS-NIR lens [45].

The imager’s sensor was a machine vision color camera made by Point Grey Research, with model code GS3-U3-23S6C-C. In 2016 Point Grey was acquired by FLIR, who now produce a similar camera with an identical name and specifications [46]. The sensor array of this camera was an RGB CMOS detector made by Sony, with the model number IMX174. The physical size of the sensor was 1/1.2 inches, and the pixel count was 1920×1200. The sensor operates with a global electronic shutter, meaning all pixels are exposed simultaneously.

The optical components were positioned relative to each other and kept in their respective places by standard optomechanical components. The imager was encased in a hard plastic shell, with ports for exchanging the band-pass filter and connecting cables. A schematic view of the imager is presented in Fig. 3.1.

For the selected wavelength range, the imager provided data in 133 wavelength channels. The wavelength separation between two adjacent channels varied slightly. The FWHM of the wavelength channels also changed along the wavelength range, with the largest values of over 16 nm at short wavelengths and the smallest values of under 12 nm at long wavelengths.

Figure 3.1: Schematic view of the spectral imager. The numbered parts are 1: objective, 2: C-mount thread, 3: collimation lens system, 4: PFPI module, 5: short- and longpass filters, 6: plastic case, 7: focusing lens system, 8: camera sensor. Sizes and relative distances of the components are not to scale.

The camera was controlled with a separate computer. For most of the performed imaging, the used software was CubeView [30], a tool developed in the Spectral Imaging Laboratory of the University of Jyväskylä (JYU). The software is designed to work with several FPI-based imagers. Features of CubeView include providing a live view from the sensor to aid in focusing and adjusting parameters, capturing hyperspectral images and saving them, and analysis tools.

CubeView is a graphical user interface built mainly on Python libraries previously developed at JYU, namely camazing [47], spectracular [48], and fpipy [49]. camazing is a general-purpose machine vision library compatible with GenICam-standard [50] cameras. In CubeView, camazing is responsible for connecting to the camera sensor, adjusting its settings, and capturing raw data. spectracular is used for controlling the FPI. It is designed to work with both Micro-Electro-Mechanical Systems (MEMS)-based and Piezo-actuated FPIs built by VTT. Controlling the air gap requires a calibration file, which defines the control voltages fed to the actuators of the FPI.

During a measurement sequence, the FPI gap is varied, and the camera readings are recorded. As the imager is equipped with an RGB sensor, not all wavelength channels need their own exposures. In our measurements, the FPI went through 80 different gap values, resulting in 133 wavelength channels. From the raw sensor data of a measurement, spectracular can build a dataset using xarray [51]. This data structure included the related dark current measurement, the gain and exposure time of the measurement, and the 80 captured frames.

The raw data could be passed to fpipy for further processing, such as Bayer interpolation and radiance calculations, or saved as is using the netCDF4-python library [52]. The calculations performed with fpipy are considered in more detail in Chap. 4.
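The structure of such a raw dataset can be sketched with xarray as below; the variable and attribute names are hypothetical stand-ins for the actual schema used by spectracular and fpipy, and the array sizes are reduced for illustration.

```python
import numpy as np
import xarray as xr

n_frames, height, width = 80, 120, 192   # reduced illustrative sizes
raw = xr.Dataset(
    {
        # the 80 frames captured at different FPI gap values
        "frames": (("index", "y", "x"),
                   np.zeros((n_frames, height, width), dtype=np.uint16)),
        # the related dark current measurement
        "dark": (("y", "x"), np.zeros((height, width), dtype=np.uint16)),
    },
    coords={"index": np.arange(n_frames)},
    attrs={"gain": 4, "exposure_time_s": 4.0},
)
raw.to_netcdf("raw_cube.nc")   # saved as is, as described above
```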

3.2 HSI characterization

The primary characterization method relied on imaging a diffuse surface illuminated by light with a narrow spectrum and comparing the captured readings to those given by a reference detector. Setups similar in their operating principle have been used for calibration of FPI–HSIs in Ref. [30] and Ref. [29]. A schematic view of the used setup is presented in Fig. 3.2.

Figure 3.2: Schematic view of the used characterization setup. The numbered devices are 1: halogen light source, 2: monochromator, 3: spectral imager, 4: integrating sphere, 5: spectrometer, 6: spectroradiometer.

In addition to the spectral imager, the setup consisted of a halogen light source (Thorlabs OSL2 [53]), a monochromator (Newport-Oriel Cornerstone CS130-USB-1-MC [54]), an integrating sphere (Labsphere General Purpose [55]), a spectrometer (Hamamatsu C10027-01 [56]), and a spectroradiometer (Konica Minolta CS-2000 [57]). The spectral imager, monochromator, spectrometer, and spectroradiometer were controlled by separate laptops. The light was guided from the light source to the monochromator through an optical fiber bundle. The same method was used to transport light from the integrating sphere to the spectrometer.

At the start of the measurements, the light source was first allowed to stabilize for half an hour to decrease fluctuations in its output power and spectrum. During operation, light was guided from the source to the monochromator. The monochromator passed a narrow wavelength band: the bandwidth could be adjusted with micrometers at the input and output ports, and the central wavelength was controlled by computer software tuning the orientation of optical components inside the device. The narrow band was passed to an integrating sphere, where it was measured with three devices: the spectrometer, the spectroradiometer, and the HSI.

The main purpose of the spectrometer was to observe the bandwidth of the light to ensure it did not exceed the width of the HSI channels as determined by a previous calibration procedure. The reference detector to which the imager was compared was the spectroradiometer.


As presented in Fig. 3.2, the spectrometer fiber was placed in an aperture on top of the integrating sphere, and the imager and the spectroradiometer were positioned on opposite sides of the sphere. The imager and spectroradiometer were angled so as not to include the opposite-side aperture in the area from which they recorded the incoming light. The devices were also aimed so that neither would point at the baffle inside the integrating sphere.

Before starting the characterization measurements, several test images were taken to determine the correct positions and settings of the instruments. A deciding factor for the settings was the small amount of light passing through the monochromator. For the spectroradiometer, the largest possible viewing angle of 1 degree was used. For the HSI, the objective aperture was set at its largest possible value of F#1.4. The exposure time was also increased to its maximum of 4 seconds per frame, and the sensor gain was set at 4.

The series of characterization measurements was performed over three days. At the beginning of each session, the light source was first allowed to stabilize for half an hour before starting the measurements. During this time, a measurement for dark correction of the hyperspectral images was made: a lens cap was placed on the end of the objective, and the dark current of the sensor was measured using the same exposure time and gain values as in the actual measurements.

For the characterization measurements, light from the halogen source was limited to a narrow wavelength band using the monochromator and then passed to the integrating sphere. The central wavelength of each band was chosen to correspond with one of the HSI wavelength channels as determined by an earlier manufacturer characterization. The spacing between subsequent wavelength channels varied from 1 nm to 4 nm, with smaller values in shorter wavelengths and larger values in longer wavelengths.

The bandwidth of the light was checked with the Hamamatsu spectrometer, and confirmed to be smaller than the FWHM of the wavelength channel of the imager, according to previous characterization. The FWHMs of the measured bands varied between 10 nm and 12 nm, while the FWHMs of the HSI channels were between 12 nm and 16.5 nm. The light reflected from the inner surface of the integrating sphere was next measured with the spectroradiometer and the HSI. The measurement results of all three devices were saved, and the wavelength passed by the monochromator was changed to match the next HSI wavelength channel.


In total, 115 measurements were made with this procedure. While the HSI could measure 133 wavelength channels, only 115 of those were included in the 350-780 nm wavelength range of the spectroradiometer. The wavelength range where the HSI was characterized was limited to those wavelengths where the operating areas of the HSI and the spectroradiometer overlapped: from 456 nm to 780 nm.

To find how accurately the HSI could determine the wavelength of light, two light sources with narrow and well-known emission peaks were imaged. The HSI was placed close to a flat white reference, and test images were taken to ensure that the reference encompassed the whole sensor. The reference was then illuminated, first using a Helium-Neon laser with a wavelength of 632.8 nm [58]. The laser was positioned several meters away from the imaging target, and a high-power lens was placed near the laser output port to disperse the beam over the white reference. Test images were taken of the illuminated target to determine the correct HSI parameters. Overexposure of the sensor was avoided by monitoring the peak signal. After setting the parameters, a dark reference was taken and then the target was imaged.

The other light source used in the wavelength characterization was a high-pressure mercury vapor lamp [59], which has an emission line at the wavelength of 546.1 nm [60]. The HSI and white reference positions were kept the same, and the reference was illuminated with the mercury lamp. The lamp housing had rudimentary optics for collimating the light, and no additional lenses were needed to produce suitable imaging conditions. In order to stabilize the output level, the lamp was allowed to warm up for approximately 5 minutes after switching it on. The HSI parameters were then again adjusted and a dark reference was taken before imaging the target.

A schematic view of the setup used in the characterization measurements made with the narrow-band light sources can be seen in Fig. 3.3. While the schematic suggests that the target was illuminated with both sources at once, separate measurements were made with the two sources.

Figure 3.3: Schematic view of the narrow-band light source setup, with the He-Ne laser and the Hg lamp illuminating the white reference.

3.3 Asteroid imaging

The central idea in the second set of measurements was to mimic the conditions of imaging an asteroid as closely as possible in a laboratory environment. The main objective of the measurements was to evaluate the performance of the white


Figure 3.3: Schematic view of the narrow-band light source setup, with the He-Ne laser and the Hg lamp illuminating the white reference.


Designing the measurement setup began by evaluating the imaging conditions as they will be for ESA's Hera mission to the twin asteroid Didymos [11]. Didymos' distance from the Sun, its heliocentric distance, varies between 1.0 and 2.3 AU [61].

To evaluate the spectral irradiance incident on Didymos, data for solar irradiance outside of Earth’s atmosphere was utilized. This data was provided by the National Renewable Energy Laboratory (NREL) of the United States [40].

To estimate what heliocentric distance the amount of light present in the laboratory measurements corresponded to, the irradiance was converted to illuminance. A photometric quantity was used because of the available measurement equipment: an irradiance meter was not available, but illuminance could be quickly measured with the spectroradiometer used in the measurements, a Konica Minolta CS-2000. The unit conversion was carried out according to Eq. (2.15).

The solar irradiance data was given near Earth, at a heliocentric distance of 1 AU. To evaluate the irradiance spectrum at other distances, the data was divided by the square of the distance in astronomical units, according to Eq. (2.19). Next, the data was converted to photometric quantities by weighting it with the spectral sensitivity of the human eye (the V(λ) curve, presented in Fig. 2.5) and multiplying it with the constant 683 lm/W. The obtained curve was then integrated over wavelength to produce illuminance readings in the unit of lx. At a distance of 1 AU, the calculated illuminance was 133 000 lx, and at 2.3 AU, 25 000 lx.
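The same conversion chain can be written compactly; a sketch assuming the NREL spectrum and the V(λ) curve are sampled on a common wavelength grid (variable names are illustrative):

```python
import numpy as np

LUMINOUS_EFFICACY = 683.0  # lm/W, the constant from Eq. (2.15)

def illuminance_lx(wl_nm, irradiance_1au, v_lambda, distance_au=1.0):
    """Illuminance at a given heliocentric distance from spectral irradiance at 1 AU.

    wl_nm: wavelengths [nm]; irradiance_1au: spectral irradiance [W/(m^2 nm)];
    v_lambda: photopic sensitivity V(lambda) sampled on wl_nm.
    """
    scaled = np.asarray(irradiance_1au) / distance_au**2   # inverse-square law, Eq. (2.19)
    return LUMINOUS_EFFICACY * np.trapz(scaled * v_lambda, wl_nm)

# illuminance_lx(wl, e_sun, v)        -> about 133 000 lx at 1.0 AU
# illuminance_lx(wl, e_sun, v, 2.3)   -> about 25 000 lx
```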

In selecting the light source, three criteria were set: a Sun-like spectrum, approximately collimated output, and high power. While using the Sun itself could have fulfilled all three, it was not feasible for practical reasons. A laboratory where direct sunlight could be used for illumination was not available, and during the measurement period, the weather conditions would not have allowed bringing the equipment outside. The limited amount of daylight would also have set rather strict time constraints for the measurements.



A more practical light source was found in the form of an old slide projector (Leitz Pradovit 153). The spectral distribution of the light produced by the projector was continuous and approximately matched the shape of the Sun's spectrum. The spectrum produced by the projector can be seen in Fig. 3.4, as measured by the Konica Minolta CS-2000 spectroradiometer. The same figure also shows the solar irradiance spectrum for comparison. Both spectra were divided by their largest values to yield normalized curves for comparison.

The internal optics of the projector consisted of a light source, a reflector, a condenser lens, and an adjustable focusing lens. The light source was most likely a halogen lamp. However, the shape of the spectrum and the peak wavelength suggest that the projector also included a means of reducing the amount of red and infrared light. The resulting spectral distribution is closer to that of the Sun than the spectra of conventional halogen sources.

The focusing lens of the projector was adjusted for the best possible collimation. The resulting distribution of light appeared flat when projected onto a wall. The size of the illuminated area increased when moving the projector further away, indicating that the collimation was not ideal. However, light near the optical axis was deemed approximately collimated, and the imperfect collimation allowed controlling the amount of light incident on a sample by changing the distance between sample and projector.

Figure 3.4: Normalized irradiance spectra of the Sun and the slide projector.



The projector was placed on a movable stage, and a sample holder was set in front of it. The sample holder and the area behind and in front of it were covered with black cloth to reduce background reflections. The HSI was attached to a tripod and positioned approximately 35 cm away from the sample holder.

While Chap. 2 describes evaluating the incident spectral radiance through calculations, in the laboratory experiments this quantity was measured using the spectroradiometer. The optical elements of the projector, used for collecting the light and modifying its spatial distribution, would have made it very difficult to calculate the incident radiance from the solid angle subtended by the light source and the phase angle of the incident light.

The spectroradiometer was placed behind the HSI, approximately two meters away from it. The heights of the middle points of the projector lens, the sample holder, the HSI objective, and the spectroradiometer objective were measured and adjusted until they matched. The geometry of the measurement setup seen from above is presented in Fig. 3.5(a). The phase angle, denoted in the figure with ϕ, was approximately 30°. Figure 3.5(b) shows a photograph taken from behind the projector. The spectroradiometer, placed approximately two meters behind the HSI, is not visible in this photograph.

The described setup was used to image five different samples: a rock to stand in for an asteroid, two colorchecker cards, and two gray ceramic reflectance standards with a matte surface. Additionally, a white sample was used to evaluate the amount and spectral distribution of incident light.

The rock sample was measured with eight different light levels, which were set by moving the projector. At the beginning of these measurements, the projector's distance to the sample, denoted in Fig. 3.5(a) with dp, was measured using a tape measure. Next, a white sample was placed on the sample stage so that its normal pointed toward the projector. The HSI was removed from its tripod, and the white sample was measured with the spectroradiometer. The results of this measurement were used to calculate the illuminance incident on the sample, from which the approximately equivalent heliocentric distance was then evaluated.

The spectroradiometer software automatically calculated the value of luminance from each measurement in the unit of cd/m², which can be related to lux through the relations given in Tab. 2.1: cd/m² = lm/(m² sr) = lx/sr.


(a) Schematic view of the setup: 1 is the projector, 2 is the sample stage, 3 is the HSI, and 4 is the spectroradiometer; dp and dHSI denote the projector-sample and HSI-sample distances, and ϕ the phase angle.

(b) Photograph of the setup showing the projector, the sample stage, and the HSI.

Figure 3.5: A schematic view and a photograph of the measurement setup for simulating asteroid imaging.

Approximating the white sample as an ideal diffuse (Lambertian) surface, it was assumed to reflect light uniformly in all directions on its illuminated side. The luminance was converted to illuminance by multiplying it with π, the result of integrating Lambert's cosine law over the half-space. The projector-sample distances, the measured illuminances, and the equivalent heliocentric distances are presented in Tab. 3.1. The table also shows the total exposure time ttot for a hyperspectral image, calculated as a sum of 80 exposures. In truth, the measurement time was slightly longer, as the FPI-based imager must adjust its air gap between exposures.

When imaging a moving target with a frame-based imager, one concern is the imager's temporal scanning: the hyperspectral image is built over time, possibly causing offset between frames.


Table 3.1: Projector-sample distances dp, luminance values LV, illuminance values IV, corresponding approximate heliocentric distances ds, and total exposure times ttot for measurements of the rock sample.

dp [cm]   LV [lm/(m² sr)]   IV [lx]   ds [AU]   ttot [s]
 51       9 200             29 000    2.1        2.8
 62       6 300             20 000    2.6        4.4
 72       4 800             15 000    3.0        6.0
 84       3 600             11 000    3.5        7.6
 98       2 800              8 800    3.9       10.0
113       2 100              6 600    4.5       13.2
125       1 800              5 700    4.8       16.4
149       1 300              4 100    5.7       22.8
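The chain from a luminance reading to an equivalent heliocentric distance in Tab. 3.1 is short enough to show in full; a sketch assuming a unit-reflectance Lambertian white sample, with the 1 AU illuminance of 133 000 lx computed earlier:

```python
import math

SOLAR_ILLUMINANCE_1AU_LX = 133_000  # illuminance at 1 AU, integrated above

def equivalent_distance_au(luminance_cd_m2: float) -> float:
    """Heliocentric distance whose solar illuminance matches the measured level."""
    illuminance = math.pi * luminance_cd_m2  # E_v = pi * L_v for an ideal Lambertian surface
    return math.sqrt(SOLAR_ILLUMINANCE_1AU_LX / illuminance)

print(round(equivalent_distance_au(9200), 1))  # 2.1 AU, first row of Tab. 3.1
```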

If the target asteroid of the Hera mission, Didymos, were imaged with an imager similar to the one used in this study, these effects could likely be ignored. The rotation period of Didymos is approximately 2.26 hours [61]. The shortest total exposure time used in our laboratory measurements, 2.8 seconds, is only 0.03% of the rotation period. The offset between frames caused by Didymos' rotation would then be minuscule.
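The 0.03% figure follows directly from these two numbers; a worked check using the values from the text:

```python
rotation_period_s = 2.26 * 3600  # Didymos rotation period [61]
exposure_s = 2.8                 # shortest total exposure time, Tab. 3.1
fraction = exposure_s / rotation_period_s
print(f"{100 * fraction:.3f} % of a rotation, {360 * fraction:.2f} degrees")
# -> 0.034 % of a rotation, i.e. about 0.12 degrees of surface rotation per cube
```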

The heliocentric distance corresponding to this measurement was 2.1 AU, which is close to Didymos' maximum heliocentric distance of 2.3 AU. In actual measurements, the illuminance would be higher, further shortening the required exposure time. The preliminary mission plan for Milani [13] states the baseline plan as starting Milani's operations when the distance from the Sun is 1.5 AU and continuing for six months until reaching 2.2 AU. Another factor affecting the relative movement of Didymos' surface and the satellite imaging it is the orbit of this satellite around Didymos. The effect of this movement on the imaging should be taken into account when designing the orbit.

After evaluating the incident illuminance, the white sample was rotated to point its normal toward the spectroradiometer, and the reflected spectral radiance was measured again. The result of this measurement was used as the light reflected from an ideal sample in the proposed white correction method. The reason for measuring the light reflected from a white sample, as opposed to measuring the light source directly and evaluating the reflected light computationally, was the limited capacity of the spectroradiometer: the light source would have been too bright. Even measuring the reflected light required using the device's narrowest acceptance angle setting of 0.1°.



After completing the two measurements with the spectroradiometer, the HSI was placed back on its tripod between the sample and the spectroradiometer. The HSI exposure time was adjusted until none of the 80 frames was saturated. To keep the signal-to-noise ratio as high as possible, the exposure time was set so that the pixel readings stayed close to their maximum value. For the same reason, the internal gain of the sensor was kept low, at a value of 2 out of a maximum of 30.
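This adjustment lends itself to a simple search. A hedged sketch, where capture(t), standing for one full FPI sweep at exposure time t, is a hypothetical interface, and a 12-bit sensor (full scale 4095) is assumed:

```python
def find_exposure(capture, t: float, full_scale: int = 4095) -> float:
    """Lower the exposure until no frame saturates, then aim the peak near full scale."""
    cube = capture(t)
    while cube.max() >= full_scale and t > 1e-6:  # halve while any pixel saturates
        t /= 2.0
        cube = capture(t)
    # Assuming a linear sensor response, scale so the peak sits at ~95 % of full scale.
    return t * 0.95 * full_scale / cube.max()
```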

After adjusting the camera parameters, a lens cap was placed on the objective, and the sensor’s dark current was measured. The cap was then removed, and the white reference was measured. Finally, the white reference was replaced with the sample, and another hyperspectral image was taken.
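These three cubes (dark, white reference, sample) are what the familiar white-reference correction requires; a minimal per-pixel sketch (not the thesis' proposed radiance-based method, just the conventional baseline), with array names illustrative and all cubes assumed to share the same shape:

```python
import numpy as np

def reflectance(sample: np.ndarray, white: np.ndarray, dark: np.ndarray,
                r_white: float = 1.0) -> np.ndarray:
    """Per-pixel, per-channel reflectance via white-reference correction."""
    signal = sample.astype(np.float64) - dark
    reference = white.astype(np.float64) - dark
    return r_white * signal / np.maximum(reference, 1e-9)  # guard against division by zero
```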

This measurement sequence was identical for each measurement of the rock sample. It was also used for the two ceramic samples, with the exception of only measuring with one light level of 22 klx. For the measurements of the colorchecker cards, the illuminance was calculated to be 24 klx. In addition to spectral imaging, the two colored patches visible in each hyperspectral image were also measured with the spectroradiometer.
