
3.1 Optical setup

One of the main requirements for this optical setup is that it must work in the near field.

Previously, the common approach to lensless imaging operated in the far field, where Fourier optics can be applied. Here, the transfer function of the wavefront propagation is modelled using the angular spectrum method, which is detailed in Sec. 3.2.

The optical setup is simple: it consists of a laser illuminator, a specimen (the object), a binary phase modulation mask, and a camera sensor (Figure 3.1). The pattern on the phase mask is randomized.

Figure 3.1. Illustration of the optical setup. [17, Fig. 1]

The wavelength of the illuminating laser is 532 nm, and the pixel size of the mask is 1.73 µm, which is half of the camera sensor's. A cut of the binary phase mask is shown in Figure 3.2. The height difference between the mask pixels is 500 nm with a 10% uniform deviation due to dullness errors in the mask surface [17].

Figure 3.2. Cut of the phase mask. [17, Fig. 7]

3.2 Image formation

Since the optical setup has relatively small distances between the components, diffraction is considered in the near field. The propagation of the wavefronts is modelled with the angular spectrum (AS) method [8, Ch. 1], which allows predictions in both forward and backward propagation. The wavefront propagation and its transfer function are defined as:

u(x, y, d) = \mathcal{F}^{-1}\{ H(f_x, f_y, d) \cdot \mathcal{F}\{ u(x, y, 0) \} \}   (3.1)

H(f_x, f_y, d) =
\begin{cases}
\exp\!\left[ i \dfrac{2\pi}{\lambda} d \sqrt{1 - \lambda^2 (f_x^2 + f_y^2)} \right], & f_x^2 + f_y^2 \le \dfrac{1}{\lambda^2}, \\[4pt]
0, & \text{otherwise.}
\end{cases}   (3.2)

Function u(x, y, d) is the result of free-space propagation of u(x, y, 0), where x and y describe the pixel location. The AS transfer function H(f_x, f_y, d) is defined by the distance d, the spatial frequencies f_x and f_y, and the wavelength λ. Some of the support constraints used in the reconstruction algorithm are non-negativity and window size; the circular window in the center of the image determines how the image is cropped.
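To make the operator concrete, below is a minimal NumPy sketch of Eqs. (3.1)–(3.2). The function name, the square-grid assumption, and the pixel pitch dx are illustrative choices, not details from [17].

```python
import numpy as np

def angular_spectrum_propagate(u0, d, wavelength, dx):
    """Propagate the complex field u0 by distance d (metres), Eqs. (3.1)-(3.2)."""
    n = u0.shape[0]                      # assumes a square n x n grid
    f = np.fft.fftfreq(n, d=dx)          # spatial frequencies f_x, f_y
    fx, fy = np.meshgrid(f, f)
    arg = 1.0 - wavelength**2 * (fx**2 + fy**2)
    # Transfer function H(f_x, f_y, d); evanescent components are set to zero
    H = np.where(arg > 0,
                 np.exp(1j * 2 * np.pi / wavelength * d * np.sqrt(np.maximum(arg, 0.0))),
                 0)
    return np.fft.ifft2(H * np.fft.fft2(u0))
```

Backward propagation is obtained simply by passing a negative distance d.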

The iterative phase retrieval algorithm [17] proceeds as follows (a sketch of the loop is given after the list):

1. Initialization: Form an initial estimate of the object wavefront at the object plane.

2. Until a criterion is met, repeat:

(a) Forward propagation: Propagate the object wavefront (3.1) to the mask plane over distance d₁, add the mask, then propagate it to the sensor plane over distance d₂.

(b) Wavefront update: Replace the amplitude of the wavefront at the sensor plane with the measured amplitude (the square root of the captured intensity).

(c) Backward propagation: Propagate the updated wavefront back to the mask plane over distance d₂ and subtract the mask. Then propagate the wavefront back from the mask plane to the object plane over distance d₁.

(d) Filters: DRUNet, BM3D filter, apodization.

3. Output. [17]

The criterion for stopping the iteration can be an error measure such as the relative root mean square error (RRMSE).
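As a hedged illustration, the following sketch combines the steps above with the angular_spectrum_propagate function sketched earlier. The measured sensor amplitude meas_amp, the mask phase array mask, the distances d1 and d2, and the placeholder denoise stage (standing in for the DRUNet/BM3D/apodization filters) are assumed inputs, not the actual implementation of [17].

```python
import numpy as np

def reconstruct(meas_amp, mask, d1, d2, wavelength, dx,
                n_iter=100, tol=1e-3, denoise=lambda u: u):
    u = np.ones_like(meas_amp, dtype=complex)        # 1. initial object estimate
    prev = None
    for _ in range(n_iter):                          # 2. iterate until criterion met
        # (a) forward: object -> mask (add phase) -> sensor
        um = angular_spectrum_propagate(u, d1, wavelength, dx)
        us = angular_spectrum_propagate(um * np.exp(1j * mask), d2, wavelength, dx)
        # (b) wavefront update: keep the phase, impose the measured amplitude
        us = meas_amp * np.exp(1j * np.angle(us))
        # (c) backward: sensor -> mask (subtract phase) -> object
        um = angular_spectrum_propagate(us, -d2, wavelength, dx)
        u = angular_spectrum_propagate(um * np.exp(-1j * mask), -d1, wavelength, dx)
        # (d) filtering stage (DRUNet / BM3D / apodization in the thesis)
        u = denoise(u)
        # 3. stop on small relative change, in the spirit of RRMSE
        amp = np.abs(u)
        if prev is not None and np.linalg.norm(amp - prev) / np.linalg.norm(prev) < tol:
            break
        prev = amp
    return u
```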

3.3 Simulations and movement detection

Before the physical experiments, the first videos of bacteria were created artificially using a still image from the MATLAB sample image set. The image was translated laterally frame by frame in MATLAB to simulate movement. The first iteration of the tracking algorithm was able to detect and track moving parts of the image, distinguished from the background. This was a proof of concept that encouraged us to take this approach. At this point, the major disadvantage was that only moving objects were detected.

By modifying the detection part of the algorithm, we were able to fix this.
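For illustration, a Python/NumPy analogue of that MATLAB simulation is sketched below (the thesis work itself was done in MATLAB); the frame count and shift step are arbitrary example values.

```python
import numpy as np

def synthetic_motion(image, n_frames=50, step=2):
    """Yield frames in which `image` drifts `step` pixels right per frame."""
    for k in range(n_frames):
        # np.roll wraps around at the border, which is acceptable
        # for a proof-of-concept tracking input
        yield np.roll(image, shift=k * step, axis=1)
```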

3.4 Object detection and tracking

The object tracking algorithm uses the ORB key point detector and blob detection. The idea is to find groups of ORB points that correlate with each other. Morphological operations are applied to the ORB image, after which blob detection finds the supposed objects in the image. Part of the noise is filtered out of the binary image, but some remains. ORB detects details when there is not too much noise; otherwise, false positives occur. Moreover, the algorithm tries to predict the objects' movement statistically using Kalman filters; a sketch of this predictor closes this section.
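A minimal OpenCV sketch of this detection pipeline is given below. The parameter values (number of features, kernel size, blob filter settings) are illustrative assumptions rather than the settings used in the thesis software.

```python
import cv2
import numpy as np

def detect_objects(gray):
    # 1. ORB key points in the grayscale frame (the ROI of Figure 3.3c)
    orb = cv2.ORB_create(nfeatures=500)
    keypoints = orb.detect(gray, None)

    # 2. Stamp the key-point locations into a binary image
    binary = np.zeros(gray.shape, dtype=np.uint8)
    for kp in keypoints:
        cv2.circle(binary, (int(kp.pt[0]), int(kp.pt[1])), 3, 255, -1)

    # 3. Morphology: closing merges nearby points into blobs,
    #    opening removes isolated noise points
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
    binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)

    # 4. Blob detection on the cleaned mask: each blob is a candidate object
    params = cv2.SimpleBlobDetector_Params()
    params.filterByColor, params.blobColor = True, 255
    blobs = cv2.SimpleBlobDetector_create(params).detect(binary)
    # Return centre and rough size per blob (basis for the yellow rectangles)
    return [(b.pt, b.size) for b in blobs]
```

In the full tracker, each returned blob would be matched to an existing target or used to spawn a new one.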

Figure 3.3 shows screen captures of the three windows of the running tracking program. The red rectangle in Figure 3.3c is the region of interest (ROI) in which the ORB key points are searched. Figure 3.3a shows the found ORB points. Figure 3.3b shows target 1, the detected object bordered with the yellow rectangle. The rectangles can be drawn on the original video, as shown in Figure 3.3c, to illustrate the size and location of the object.

Figure 3.3. Still image of the windows: (a) ORB key points; (b) morphological operations and blob detection; (c) the detected bacteria from a video.

Sometimes the object is not perfectly inside the yellow rectangle; this is due to errors in the key points. The most common sources of error are noise and low contrast.
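The statistical motion prediction mentioned at the start of this section can be sketched with OpenCV's Kalman filter under a constant-velocity model; the noise covariances below are illustrative guesses, not tuned values from the thesis.

```python
import cv2
import numpy as np

kf = cv2.KalmanFilter(4, 2)            # state: x, y, vx, vy; measured: x, y
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = 1e-3 * np.eye(4, dtype=np.float32)      # illustrative
kf.measurementNoiseCov = 1e-1 * np.eye(2, dtype=np.float32)  # illustrative

def track_step(measured_xy):
    """Predict the next position; correct with a detection when available."""
    predicted = kf.predict()           # used alone when detection fails
    if measured_xy is not None:
        kf.correct(np.float32(measured_xy).reshape(2, 1))
    return predicted[:2].ravel()
```

The predicted position can bridge frames in which detection fails, for example due to the noise or low contrast discussed above.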

3.5 Autofocus

Finding the correct focus level, i.e., the object–mask distance, is needed for reconstructing clear images of the bacteria. Autofocus is an algorithm that optimizes this distance. The image is the same one used in the moving object simulations.

Figure 3.4. The phase image in and out of focus: (a) focused phase image of bacteria; (b) defocused phase image of bacteria.

To test the feasibility of autofocus combined with the tracking part of the software, six focus measures were first compared to each other: CURV [18], GLVA [19], LAPV [20], LAPD [21], sum of wavelets [22], and WAVR [23]. The initial comparison showed that GLVA [19] performed well compared to the other focus measures. However, after further experimenting, none of the above proved sufficient as focus measures. The more in-depth comparison results are shown in Chapter 4.

Even though the comparison was not very informative at this point, it became apparent that even slightly blurry objects could be tracked passably. This became the leading idea for the autofocus, since focusing between frames can be expensive when using a brute-force method such as this (see the sketch below).
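The brute-force search can be sketched as follows. GLVA and LAPV are implemented with their usual definitions (gray-level variance and variance of the Laplacian), while the reconstruction callback and the candidate-distance range are assumptions for illustration.

```python
import cv2
import numpy as np

def glva(img):
    return float(np.var(img))                              # gray-level variance

def lapv(img):
    return float(np.var(cv2.Laplacian(img, cv2.CV_64F)))   # Laplacian variance

def autofocus(reconstruct_at, distances, measure=lapv):
    """Return the distance whose reconstruction maximizes `measure`.

    `reconstruct_at(d)` is assumed to return the phase image for
    object-mask distance d (e.g. via the loop sketched in Sec. 3.2)."""
    return max(distances, key=lambda d: measure(reconstruct_at(d)))
```

The sketch makes the cost visible: every candidate distance requires a full reconstruction, which is why focusing on every frame is expensive.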

3.5.1 Deep learning

Deep learning (DL) is a possible solution for autofocusing, and a well-trained network could perform it efficiently. In 2021, Wang et al. proposed an efficient DL pipeline for the autofocus problem; it outperforms traditional contrast maximization in terms of efficiency and is able to produce all-in-focus static images or video. [24]