

Benefits of aspherical optics for spherical optics devices with optical resolution

Aleksi Leinonen

MSc Thesis
June 2018

Department of Physics and Mathematics
University of Eastern Finland


Aleksi Leinonen
Benefits of aspherical optics for spherical optics devices with optical resolution, 61 pages
University of Eastern Finland
Master's Degree Programme in Photonics

Supervisors: Prof. Jari Turunen
MSc Jyrki Pikkarainen

Abstract

In this study a spherical objective is used as the base for an aspherical design. The purpose is to evaluate the viability of replacing spherical surfaces with aspherical ones and to study how the optical properties and the price of the system are affected. The cost of the system is allowed to increase if the optical performance is superior or if the weight and size of the system decrease.

The best location for an asphere was deemed to be the last surface, in order to affect the aberrations as much as possible. Aspherical surfaces proved superior to spherical surfaces in image quality, as expected. By replacing one spherical surface with an asphere the optical properties of the system improved, and it was possible to remove one lens while retaining most of the system's performance. The weight of the system also decreased by ≈ 16.8%. The removal of two lenses was attempted, but the optical power of the system, or alternatively its dispersion, was ruined in the process.

The cost of an aspherical surface was approximated to be three to five times higher than that of a spherical surface. This translates into a two to three times higher lens cost if only one surface of the lens is aspherical. A rough estimate of the price of a designed aspherical system showed it to be 60% more expensive than the original system, which is too much considering that the system's performance was not improved. Overall, aspherical lenses are more expensive than their spherical counterparts due to their more complex surface profile, but they provide better performance and compensation for aberrations.

Keywords: 110.0110 Imaging systems; 220.1000 Aberration compensation; 220.1250 Aspherics; 220.3620 Lens system design [1].


Preface

I would first like to thank my supervisors Jyrki Pikkarainen and Jari Turunen for their invaluable expertise and comments in the design and writing process. I would also like to thank Senop Oy, Millog Oy, and their personnel for the possibility to work with them and learn the whole process of optical system design from the preliminary design stage all the way into a produced model.

The motivation for this thesis was to learn optical design in practice and to study how the complete process of optical system design progresses all the way from preliminary design into a manufactured system. Also, the effect of aspheres and their cost relative to spherical lenses proved to be an interesting topic.

The reader should be aware that most parameters of the designs and some parts of other information are not given due to non-disclosure agreements with both Senop Oy and Millog Oy.

Joensuu, the 6th of June 2018
Aleksi Leinonen


Contents

1 Introduction 1

2 Theory 3

2.1 Basic optical design theory . . . 3

2.1.1 Ray and wavefront tracing . . . 3

2.1.2 Pupils, stops and vignetting . . . 5

2.1.3 Light collection ability of a system . . . 7

2.1.4 Depth of focus and field . . . 9

2.1.5 Resolution of an optical system . . . 9

2.2 Aspherical surfaces . . . 11

2.3 Aberrations . . . 12

2.3.1 Wavefront error . . . 12

2.3.2 First order aberrations . . . 14

2.3.3 Spherical aberrations . . . 14

2.3.4 Coma . . . 16

2.3.5 Astigmatism and Petzval curvature of field . . . 16

2.3.6 Distortion . . . 18

2.3.7 Chromatic aberration and dispersion in optical material. . . . 19

2.4 Third order aberration theory for spherical and aspherical surfaces . . 22

2.5 Modulation Transfer Function . . . 26

2.6 Operands and the Merit function . . . 28

2.7 Tolerancing . . . 29


3 Design and manufacturing considerations 31

3.1 Spherical Double-Gauss lens . . . 31

3.2 Aspherical designs . . . 34

3.3 Manufacturing process of an aspherical surface . . . 35

3.4 The financial aspect of lens production . . . 37

4 Design process and results 40

4.1 Preliminary analysis of aspherizing one surface . . . 40

4.2 Removal of one lens . . . 43

4.2.1 The design process . . . 43

4.2.2 Optical properties and performance . . . 45

4.2.3 Tolerances . . . 48

4.2.4 The feasability of production. . . 50

4.3 Removal of two lenses . . . 51

5 Conclusions 53

References 54

Appendices 56

A Extra figures for aspherization of L1 and L7 . . . 56

B Extra figures for design with one removed lens . . . 60


Chapter I

Introduction

Advancements in computing power and new production methods have made the design and manufacturing of aspherical lenses viable in the industry. Aspherical lenses are widely available and used, for example, in high-resolution cameras for smartphones [2], high-power laser compression systems [3], multichannel imaging systems [4], terahertz imaging [5], and camera objectives. Aspherical lenses have multiple advantages over traditional spherical lenses, including better optical resolution, reduction of optical aberrations, and the possibility to replace a significant number of spherical surfaces with a single asphere, thus reducing the size and weight of the system [6–8].

Although aspherical surfaces seem superior to spherical surfaces when comparing optical properties, this is not always the case. When comparing two optical surfaces, the cost of manufacturing must be taken into account for aspheric and spherical surfaces respectively. Aspherical surfaces tend to have higher production costs, which is directly related to their more complex surface profile; the symmetry of a spherical surface thus gives it a clear advantage in manufacturing over an asphere. The testing of surfaces also benefits from the symmetry of the surface profile. Although simple aspherical surfaces can be manufactured using traditional techniques [9], a more cost-effective way to produce aspheres is required for them to replace spherical designs [8].

In this study a spherical Double-Gauss lens is used as the base for an aspherical lens design. One or more of the surfaces of the design will be replaced with an asphere to improve the optical properties of the lens. Multiple aspherical designs based on this lens will be created, and an analysis of their price and performance will be made. Their performance and cost will be compared to the original spherical lens, and the viability of replacing spherical surfaces with an aspherical surface will be analyzed from a financial standpoint.

In Chapter 2 the theory behind optical design and aspherical surfaces is presented, the effect of aberrations on the final image is discussed, and some properties of optical design software are introduced. The optical system to be used as the base for the aspherical design is presented in Chapter 3, where the prerequisites for the aspherical designs, the manufacturing methods, and how the cost of a lens is determined are also discussed. The aspherical design process and the results are presented and discussed in Chapter 4. Finally, the conclusions are given in Chapter 5.


Chapter II

Theory

In this chapter the basic theories behind optical design are presented and discussed. The main emphasis is on aberrations and how they affect the final image. Some parameters and techniques widely used in optical design software are also discussed.

2.1 Basic optical design theory

This section highlights the basic principles of optical design without the mathematical expressions for ray and wavefront tracing. These expressions can easily be found in the literature and are thus excluded from the discussion.

2.1.1 Ray and wavefront tracing

Designing a lens system is an undertaking which takes a lot of time and effort to accomplish without modern computers. The design process is based on tracing individual rays of light, or complete wavefronts, across the system. Snell's law of refraction is an important tool in this undertaking: the mathematics of optical design are based on this fundamental law and on basic geometrical relations between known properties of a system. [10,11]
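Snell's law lends itself directly to numerical ray tracing. The following minimal Python sketch (not from the thesis; the BK7-like index is an assumed example value) computes the refraction angle at a single interface:

```python
import math

def refract_angle(n1: float, theta1_deg: float, n2: float) -> float:
    """Angle of refraction (degrees) from Snell's law: n1*sin(t1) = n2*sin(t2)."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1.0:
        raise ValueError("total internal reflection: no refracted ray")
    return math.degrees(math.asin(s))

# A ray entering BK7-like glass (n ~ 1.5168) from air at 30 degrees
# bends toward the surface normal:
theta2 = refract_angle(1.0, 30.0, 1.5168)
```

Every refraction in a full ray trace is a repeated application of this single relation, once per surface.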

The fundamental point of tracing rays is to find the focus of the system, which is commonly located at the focal point. A system is in focus when the rays originating from the object form an image at the focal point; in ray tracing this means the point where the traced rays coincide. This point commonly defines the image plane, since the image is formed on this plane if there are no aberrations1 in the system. Tracing of one ray through an optical system is presented in Fig. 2.1. [10]

Figure 2.1: A ray traced through an optical system. F marks the focal point of the system.

The location of an image, its size, and its orientation can easily be calculated using the cardinal points located on the optical axis of a system, assuming the system is well corrected, surrounded by an isotropic medium, and symmetric around the optical axis. These cardinal points are the two focal points, two nodal points, and two principal points, shown for a single lens in Fig. 2.2.

Figure 2.2: Focal F and principal P points of a system. Principal surfaces are drawn with dotted lines.

The focal points give the focal lengths of the system, which are defined as the distance between the lens surface and the focal point. These lengths are called the front focal length (ffl) and back focal length (bfl) respectively. The principal points give the two principal planes of the system, in which all of the refraction can be assumed to happen. These principal planes are defined by tracing rays entering and emerging from the system and finding the points of intersection. In the ideal case the principal surfaces are spherical. [10]

1Defects in image quality. Discussed in section 2.3.

One of the main characteristics of an optical system is its effective focal length (efl), defined as the distance between the principal point and the focal point of the system. The last of the cardinal points are the nodal points: a ray propagating toward the first nodal point emerges from the system as if it originated from the second nodal point. The principal points of a system can coincide with the nodal points; this is usually the case when the optical system is surrounded by air. [10]

Even though the cardinal points ease the tracing of rays through the system, the tracing process is still time consuming. The number of rays to be traced increases tremendously when the optical system becomes more complex, as each added surface requires additional calculations for every ray. One way to reduce the workload in ray tracing is the paraxial approximation: the rays are assumed to lie near the optical axis of the system, which allows the sine functions to be linearized, sin θ ≈ θ. The paraxial approximation is essential in optical design and can give a relatively good approximation even with large angles. [10]
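The quality of the paraxial approximation is easy to check numerically. This short sketch (illustrative, not from the thesis) tabulates the relative error of sin θ ≈ θ at a few angles:

```python
import math

# The paraxial approximation replaces sin(theta) by theta (in radians).
# Tabulating the relative error shows why the approximation holds up
# reasonably well even at moderately large angles.
errors = {}
for deg in (1, 5, 10, 20, 30):
    theta = math.radians(deg)
    errors[deg] = (theta - math.sin(theta)) / math.sin(theta)
    print(f"{deg:2d} deg: relative error {errors[deg]:.4%}")
```

The error grows roughly as θ²/6, so it stays below about half a percent out to 10 degrees.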

A real lens design process would first trace rays until sufficient performance is found. Ray tracing is the fundamental first step, but rays are only an approximation of a real wavefront. Wavefronts need to be traced for real optical systems, since the amplitude and phase of light are important considerations due to the presence of diffraction and interference in real systems. Diffraction and interference affect image quality tremendously and must be included in the design process, which obviously presents more challenges in the calculations. [10]
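The paraxial ray tracing described above can be sketched with a simple "y-u" trace, where a ray is described by its height y and slope u at each surface. The thin-lens focal lengths and spacings below are hypothetical illustration values, not the thesis design:

```python
# Minimal paraxial "y-u" ray trace through thin lenses.
# A ray is (height y, slope u); a thin lens changes the slope,
# free space changes the height.

def trace(y: float, u: float, elements):
    """elements: list of ('lens', focal_length) or ('gap', distance),
    all in consistent units (here mm)."""
    for kind, value in elements:
        if kind == "lens":      # thin-lens refraction: u' = u - y/f
            u = u - y / value
        elif kind == "gap":     # free-space transfer: y' = y + u*d
            y = y + u * value
    return y, u

# Collimated ray (u = 0) at height 10 mm through a 100 mm lens,
# a 40 mm air gap, and a 50 mm lens:
y, u = trace(10.0, 0.0, [("lens", 100.0), ("gap", 40.0), ("lens", 50.0)])
bfd = -y / u   # distance after the last lens where the ray crosses the axis
```

Repeating this for many ray heights and fields is exactly the bookkeeping that optical design software automates.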

2.1.2 Pupils, stops and vignetting

An important part of the design of optical systems is the placement of apertures, stops, and the definition of entrance and exit pupils. These are used to control the flow of light inside the system. [10]

Apertures are stops which control the propagation of light in the system. The aperture stop is the stop that defines how much illumination the image plane will receive: it limits the diameter of the cone of energy that the system will accept from various axial points. The aperture stop is usually the first stop of an optical system. [10]

Entrance and exit pupils are defined by the aperture stop. The entrance pupil is the image of the aperture stop projected to the object side, viewed from an axial point in the object plane, as presented in Fig. 2.3. The exit pupil is defined similarly, but the image of the aperture stop is viewed from the image plane.

Figure 2.3: Illustration of pupils and the aperture stop. The dashed line presents the principal ray; the other two rays are called marginal rays.

These pupils can be located by tracing the principal ray2 through the optical system: where the principal ray intersects the optical axis before the first lens lies the entrance pupil, and the exit pupil can be located after the last lens by the same method. The size of the image can be determined by tracing the marginal rays3 through the system, as presented in Fig. 2.3. The images formed at these two points define the sizes of the entrance and exit pupils respectively. [10,11]

In addition to the aperture stop, an optical system may have field stops, which limit the angular extent of the imaged object, or baffles to prevent stray light reflected inside the system from reaching the image plane. The field stop thus controls the field of view of the optical system. [10]

There also exist stops that control stray light which either enters the system or is created by reflections within the system. The stops that control stray light created within the system are called baffles. Baffles can be simple pieces of coated material; their purpose is to absorb the stray radiation or otherwise prevent it from reaching the image plane. Stray light entering the system from outside is blocked with glare stops, and if the incident radiation is in the infrared the stop is called a cold stop. Glare stops4 are located at the image of the aperture stop. [10]

2Principal ray is the ray which goes through the center of the aperture stop, also called the chief ray [10,11].

3Marginal ray is a ray which propagates from the center of the object through the system, hitting the edges of the aperture stop [10].

The combination of lenses and stops prevents some of the light entering the system from reaching the image plane. Light rays that enter the optical system but do not propagate to the image plane are called vignetted rays, presented in Fig. 2.4. The fraction of unvignetted rays is the number of rays hitting the image plane divided by the total number of rays entering the optical system; this fraction is also an indication of the light collecting capability of the optical system in question. [10,11]

Figure 2.4: Illustration of vignetted rays; the dotted red lines present the vignetted rays.

2.1.3 Light collection ability of a system

In order to produce images of an object, light must be able to traverse through the optical system. The amount of light that passes through the system is an important consideration in optical design, as the amount of light collected by the imaging system from a small source region is directly proportional to the area of the clear aperture. The power per illumination area at the image plane is inversely proportional to the square of the object's image area. [10,11]

4Can also be called a Lyot stop [10].

The purpose of the optical system to be designed is essential in defining the amount of light that needs to pass through the system. For example, if the system images bright light sources such as the sun, the amount of light can saturate the detector, producing images of poor quality. Conversely, if the system is designed for low lighting conditions, the amount of light collected should be high to compensate.

The light gathering ability of a system from a clear aperture is presented with the f-number, defined as the ratio of the focal length f of a system to the diameter D of the clear aperture of the lens:

f/\# = \frac{f}{D}, \qquad (2.1)

where f/\# in Eq. (2.1) should be understood as a single symbol in which \# customarily stands for the numerical value of the f-number. For example, a system with an f-number of 4 is written f/4. The numerical value of the f-number gives the light gathering ability of a system: interestingly, a system with a small f-number has a high light collection capability and, inversely, a system with a high f-number has a low light collection capability. The f-number is also referred to as the relative aperture or the "speed"5 of a system. [10]

The light collection capability of a system can also be expressed with the numerical aperture

\mathrm{NA} = n \sin U, \qquad (2.2)

where n is the refractive index of the medium in which the image lies and U is the half-angle of the cone of illumination. Evidently the f-number and the numerical aperture express the same characteristic of a system; the difference is that the numerical aperture is used for systems with finite conjugates, while the f-number is applied to systems used with distant objects. The two can be related if the system is aplanatic6 and the object distance is infinite: [10]

f/\# = \frac{1}{2\,\mathrm{NA}}. \qquad (2.3)

5The term speed comes from photography, where a system with high speed has high light collection ability (low f-number) and a low speed system has low light collection ability. Speed refers to the exposure time needed to image an object. [10]

6Corrected for coma and spherical aberration. [10]
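The f-number relations above translate into a few one-line functions. The example values (a 100 mm lens with a 25 mm clear aperture, unit magnification) are hypothetical, chosen only to exercise Eqs. (2.1), (2.3), and (2.5):

```python
# Relations between f-number and numerical aperture, Eqs. (2.1)-(2.5).

def f_number(f: float, D: float) -> float:
    """Eq. (2.1): ratio of focal length to clear-aperture diameter."""
    return f / D

def na_from_fnum(fnum: float) -> float:
    """Eq. (2.3): valid for an aplanatic system with an infinite object."""
    return 1.0 / (2.0 * fnum)

def working_f_number(fnum: float, m: float) -> float:
    """Eq. (2.5): image-side (working) f-number at magnification m."""
    return (1.0 - m) * fnum

fnum = f_number(100.0, 25.0)      # f = 100 mm, D = 25 mm  ->  f/4
na = na_from_fnum(fnum)
w = working_f_number(fnum, -1.0)  # 1:1 imaging (m = -1) doubles the f-number
```

Note how the working f-number grows at finite conjugates: the same lens used for 1:1 imaging collects light like a system twice as "slow".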


For systems working at finite conjugates the numerical aperture can differ between the object side (NA) and the image side (NA′), but the ratio of these terms must equal the absolute magnification m of the system: [10]

m = \frac{\mathrm{NA}}{\mathrm{NA}'}. \qquad (2.4)

The f-number is likewise related to the magnification m for systems working at finite conjugates: [10]

f'/\# = (1-m)\, f/\#, \qquad (2.5)

where f'/\# is the f-number on the image side, known as the working f-number, and f/\# is the f-number of Eq. (2.1), known as the infinite f-number. [10]

2.1.4 Depth of focus and field

Depth of focus is an important concept in optical design. It defines the acceptable blur caused by defocusing7 in an image. Originally an idea introduced by photographers, it is based on the notion that the blur should remain smaller than the resolution of a pixel8 in order to remain unnoticeable. [10]

The acceptable amount of blur depends on the purpose of the imaging system, and thus there is no universal definition for it. Depth of focus is defined as the longitudinal distance from the image plane over which the diameter of the blur remains acceptable, presented in Fig. 2.5. If the acceptable blur is defined by its angular extent, it is called the depth of field. [10]

It should be noted that this presentation of the depth of focus is an ideal case. In real cases the diameters of acceptable blur are not identical on both sides of the image plane, aberrations should be taken into account, and the image source is not a point source. These factors affect the depth of focus and should be taken into account in the design process. [10]

7Defocusing will be discussed in section 2.3.2.

8Used to be the size of a silver film grain before the invention of CCD detectors [10].

Figure 2.5: Depth of focus in a system. The diameter of the acceptable blur is B and L is the longitudinal displacement allowed from the image plane.

2.1.5 Resolution of an optical system

One of the most important parameters of an optical system is the resolution of the image produced. The resolution of the image has a limit established by diffraction at the finite aperture of the optical system. This is valid for all optical systems, even those with the most ideal performance. Every point of an object can be considered to be imaged as an Airy disk. When the distance between Airy disks is small, the diffraction patterns overlap. Two points are said to be resolved when their Airy disks can clearly be separated from one another. [10]

The resolution is usually stated as the number of pixels on the detector of an imaging system, but this is not the whole picture: the size of the pixels and the resolution-limiting spot size Z, produced by diffraction in the system, must be taken into account. The most used diffraction-limited spot size is given by Rayleigh's criterion:

Z(\lambda) = \frac{0.61\,\lambda}{\mathrm{NA}} = 1.22\,\lambda\,(f/\#), \qquad (2.6)

where λ is the wavelength of the incident light, NA is the numerical aperture of the system, and f/\# is the f-number of the system. The factor 1.22 arises from the circular aperture. Rayleigh's criterion occurs when the maximum of the first pattern coincides with the first minimum of the second pattern, so that the two maxima can clearly be distinguished in the combined pattern. This condition is mostly used for microscopes and other objectives operating near the diffraction limit. [10]

The resolution of telescopes and other systems at the image plane is evaluated using the NA of the image cone, and the resolution at the object plane using the NA of the object cone. For the evaluation of performance at long object distances the angular separation α of object points is used:

\alpha = \frac{1.22\,\lambda}{\omega}, \qquad (2.7)

where α is given in radians and ω is the diameter of the aperture. It should be noted that the angular separation is once more the limit of the resolution and not the actual resolution given by the system. [10]

The actual resolution delivered by the system is given by the number of resolvable pixels on the detector. This means the resolution of the actual image is determined by the combination of the available pixels on the detector and the spot size provided by the system.
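Eqs. (2.6) and (2.7) are straightforward to evaluate. The wavelength and aperture below are hypothetical example values (green light, an f/4 system, a 100 mm aperture), not parameters of the thesis design:

```python
# Diffraction-limited spot size, Eq. (2.6), and angular resolution, Eq. (2.7).

def rayleigh_spot(wavelength: float, f_number: float) -> float:
    """Z = 1.22 * lambda * (f/#), in the same units as the wavelength."""
    return 1.22 * wavelength * f_number

def angular_resolution(wavelength: float, aperture_diameter: float) -> float:
    """alpha = 1.22 * lambda / omega, in radians (same length units)."""
    return 1.22 * wavelength / aperture_diameter

spot = rayleigh_spot(0.55e-6, 4.0)        # 550 nm light through an f/4 system
alpha = angular_resolution(0.55e-6, 0.1)  # 100 mm aperture
```

Comparing `spot` against the detector pixel pitch tells whether the system is diffraction limited or pixel limited, which is exactly the comparison the paragraph above describes.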

2.2 Aspherical surfaces

Spherical surfaces are commonly designed using only the radius R of a sphere. The definition of aspheric surfaces also uses the radius R, but in a different way: aspheric surfaces are defined by the surface profile [12]

Z(r) = \frac{Cr^2}{1+\sqrt{1-(1+k)C^2r^2}} + \sum_{i=1}^{N} \alpha_i r^{2i}, \qquad (2.8)

where C = 1/R is the reciprocal of the vertex radius R, k is the conic constant, N is the number of orders used, r is the radial distance from the optical axis, and α_i is the coefficient of the ith order term [7]. If the aspheric terms α_i r^{2i} are equal to zero, the resulting surface is a conic depending mainly on the conic constant k: when k = 0 the surface is a sphere, other values k > −1 give an ellipse, k = −1 generates a parabola, and k < −1 a hyperbola [13]. Fig. 2.6 presents how the surface described by Eq. (2.8) is formed in the (x, y, z) coordinate system.

In Zemax it is possible to use aspheric terms up to the 480th order with the Extended Asphere surface type [12], but increasing the order of the terms also adds to the complexity of the surface profile and its manufacturing, possibly increasing the price of the lens or making it impossible to manufacture. The effect of the complexity of an aspherical lens on manufacturing costs will be analyzed in Chapters 3 and 4.

Figure 2.6: How an aspheric surface is formed using Eq. (2.8). [7]
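The sag profile of Eq. (2.8) can be evaluated directly. The radius, conic constant, and polynomial coefficients below are hypothetical illustration values; the sketch also checks the limiting cases named above (k = 0 reduces to a sphere, k = −1 to a parabola):

```python
import math

def asphere_sag(r: float, R: float, k: float, alphas=()) -> float:
    """Surface sag Z(r) of Eq. (2.8): conic base plus polynomial terms."""
    C = 1.0 / R                                   # curvature, C = 1/R
    z = C * r**2 / (1.0 + math.sqrt(1.0 - (1.0 + k) * C**2 * r**2))
    for i, a in enumerate(alphas, start=1):       # terms alpha_i * r^(2i)
        z += a * r**(2 * i)
    return z

# With k = 0 and no polynomial terms the profile is exactly spherical:
z_conic = asphere_sag(5.0, 100.0, 0.0)
z_sphere = 100.0 - math.sqrt(100.0**2 - 5.0**2)   # exact spherical sag

# k = -1 gives the parabola z = r^2 / (2R):
z_parab = asphere_sag(5.0, 100.0, -1.0)
```

Higher-order `alphas` depart from the conic shape; it is precisely these extra degrees of freedom that the design chapters exploit, and that make the surface harder to manufacture.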

2.3 Aberrations

Aberrations are defects in the quality of an image. They are formed when rays entering the optical system miss their ideal unique points on the image plane, causing object rays originating from different points to hit the same spot in the image plane and destroying the quality of the image in the process. If all of the rays incident on the imaging system hit unique points in the image plane, the image formed is an aberration-free image called a stigmatic image. If an image is not stigmatic it is called an astigmatic image. This definition of an astigmatic image is not the same as the astigmatic aberration presented in section 2.3.5, and the two should not be confused. [14]

There exist five main types of monochromatic aberrations: spherical aberration, coma, distortion, curvature of field, and astigmatism [10,11,14]. Chromatic aberrations are defined separately, since monochromatic aberrations are also valid for a system with a single wavelength [10,11,14]. These aberrations and their effects on image quality are discussed separately in sections 2.3.3–2.3.7.

2.3.1 Wavefront error

Aberrations in a system can be described by calculating the wavefront error, which gives the accurate phase delay between different parts of the incident wavefront at the image plane of the system. It is based on calculating the optical path difference around the chief ray. The chief ray is incident on the image plane at point P, with coordinates (ξ0, η0) corresponding to the pupil coordinates (x0, y0). A spherical reference wavefront, noted S0 and with radius R, is drawn through the axial point of the exit pupil using P as its centre. Another wavefront deviating by ξ and η from P is drawn similarly. The optical path difference between these coordinates along the reference ray can then be determined using Fig. 2.7. [13]

The deviation of a ray perpendicular to S from P can be approximated with good accuracy by [13]

\delta\eta = \frac{R}{n}\frac{\partial E}{\partial y_0}, \qquad (2.9)

\delta\xi = \frac{R}{n}\frac{\partial E}{\partial x_0}, \qquad (2.10)

where n is the refractive index of the medium. Introducing relative pupil coordinates x_0 = xr and y_0 = yr, where r is the radius of the exit pupil, and using the definition r/R = \sin u, we get [13]

\delta\eta = \frac{1}{n \sin u}\frac{\partial E}{\partial y}, \qquad (2.11)

\delta\xi = \frac{1}{n \sin u}\frac{\partial E}{\partial x}. \qquad (2.12)

By similarly defining relative field coordinates ξ0 = ξr̄ and η0 = ηr̄, where r̄ is the image field radius, it is possible to write the wavefront error W(x, y, ξ, η) as a function of the rotational invariants x² + y², ξx + ηy, and ξ² + η². Because we are dealing with rotationally symmetric systems we can also assume ξ ≈ 0. By developing a power series for W(x, y, η) we get a good approximation for the wavefront error: [13]

W(x, y, \eta) = a_1(x^2+y^2) + a_2\eta y + a_3\eta^2 + b_1(x^2+y^2)^2 + b_2\eta y(x^2+y^2) + b_3\eta^2 y^2 + b_4\eta^2(x^2+y^2) + b_5\eta^3 y + \dots \qquad (2.13)

Figure 2.7: Wavefront error presented in the yz-plane. The wavefront error would be drawn similarly for ξ but in the xz-plane.

The wavefront error given by Eq. (2.13) is also known as Hamilton's expansion; it relates the different monochromatic aberrations to the coefficients of the different powers. Coefficients a1–a3 give the first order aberrations and b1–b5 the third order aberrations; higher orders have their own coefficients, but the third order aberrations have the most influence on the final image. The calculation of the third order aberration coefficients related to the wavefront error W(x, y, η) is presented later in section 2.4.
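Hamilton's expansion of Eq. (2.13) is simple to evaluate term by term. The coefficient values below are hypothetical; in practice they come from the third-order theory of section 2.4. The example demonstrates the field independence of spherical aberration (only b1 nonzero):

```python
# Eq. (2.13): first- and third-order terms of Hamilton's expansion.

def wavefront_error(x, y, eta, a, b):
    """a = (a1, a2, a3) first-order, b = (b1..b5) third-order coefficients.
    (x, y) are relative pupil coordinates, eta the relative field coordinate."""
    r2 = x**2 + y**2
    a1, a2, a3 = a
    b1, b2, b3, b4, b5 = b
    return (a1 * r2 + a2 * eta * y + a3 * eta**2
            + b1 * r2**2 + b2 * eta * y * r2
            + b3 * eta**2 * y**2 + b4 * eta**2 * r2 + b5 * eta**3 * y)

# Pure spherical aberration (only b1 nonzero) gives the same error
# on axis (eta = 0) and at the field edge (eta = 0.8):
coeffs_a, coeffs_b = (0.0, 0.0, 0.0), (0.1, 0.0, 0.0, 0.0, 0.0)
W_axis = wavefront_error(0.0, 1.0, 0.0, coeffs_a, coeffs_b)
W_edge = wavefront_error(0.0, 1.0, 0.8, coeffs_a, coeffs_b)
```

Switching on b2 or b5 instead makes the error grow with eta, which is why coma and the other field-dependent terms dominate off-axis image quality.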

2.3.2 First order aberrations

The first aberration to be considered is defocus which is not always considered as an aberration but is used in minimizing aberrations. Defocus is presented in Eq. (2.13) by the term a1. Defocus is measured from the image plane and is defined as the distance of the image from the focal plane of the system. Image formed outside of the focal plane is blurry and astigmatic, thus aberrated. Defocus depends only on the entrance pupil coordinates and is independent of the object height and field angle thus having an uniform impact on the whole image. Defocusing can be used to reduce aberrations by finding the circle of least confusion and defocusing the image plane to that location. It should be noted that defocus only affects astigmatic and chromatic aberrations but it has no effect on comatic aberrations as it only depends on the entrance pupil coordinates x and y. [10,14]

The other two first order terms, a2 and a3, are tilt and magnification error respectively. Tilt refers to the tilting of the complete wavefront, which can be expected from the dependence on the field coordinate η and the y-coordinate. The magnification error, on the other hand, depends only on η², which causes only the magnification at the image plane to change. [13]

2.3.3 Spherical aberrations

Spherical aberration is a monochromatic aberration in which the rays incident on the lens do not focus on a single point on the image plane; instead the rays focus on different points on or beside the optical axis of the lens. Spherical aberration is presented by the coefficient b1 in Eq. (2.13), from which it can be seen that it is a field-independent aberration. In the case of spherical aberration a unique imaging point is missing but the rotational symmetry is retained. Spherical aberration can be divided into longitudinal and transverse aberrations. It should be noted that longitudinal and transverse spherical aberrations are not independent of one another; both are present simultaneously and affect each other. In Fig. 2.8 they are presented as independent in order to show the effect of each clearly. [14–16]

Longitudinal spherical aberration is present in an imaging system when the focusing point of the lens varies along the optical axis depending on the curvature of the lens, as presented in Fig. 2.8. From the figure one can notice that the effect of longitudinal spherical aberration can be minimized by defocusing the system. [11]

Figure 2.8: Longitudinal and transverse spherical aberration.

Transverse spherical aberration, on the other hand, is described as the distance between the focal point on the image plane and the point at which the ray actually intersects the image plane, as seen in Fig. 2.8. This aberration can be minimized by changing the aperture size of the system. [11]

The effect of spherical aberration is a deterioration of image quality, seen as a blurred image. To prevent blurring one can defocus the lens and find the circle of least confusion, the location where the transverse blur is smallest. Another good way to reduce spherical aberration is to add an aspherical surface: aspherizing a surface gives more control over the refraction of rays at the extreme zones of the surface, thus reducing the overall spherical aberration. [10,11]
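Longitudinal spherical aberration can be demonstrated with an exact trace of parallel rays through a single refracting sphere; this is an illustrative sketch, not a thesis design, and the surface radius and index are assumed example values. The geometry follows from Snell's law at the spherical surface:

```python
import math

def axis_crossing(h: float, R: float, n1: float, n2: float) -> float:
    """Distance from the vertex at which a ray parallel to the axis at
    height h crosses the axis after one refraction at a sphere of radius R."""
    theta = math.asin(h / R)                   # angle of incidence at the surface
    theta_r = math.asin(n1 * math.sin(theta) / n2)
    u = theta - theta_r                        # slope angle after refraction
    z_hit = R * (1.0 - math.cos(theta))        # sag of the intersection point
    return z_hit + h / math.tan(u)

R, n1, n2 = 50.0, 1.0, 1.5                     # mm; air into n = 1.5 glass
paraxial_focus = n2 * R / (n2 - n1)            # paraxial image distance (150 mm)
marginal_focus = axis_crossing(15.0, R, n1, n2)
lsa = paraxial_focus - marginal_focus          # marginal rays focus short: lsa > 0
```

The positive `lsa` shows exactly the behaviour of Fig. 2.8: the outer zones of a spherical surface refract too strongly, and an aspheric profile flattens those zones to pull the marginal focus back out.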


2.3.4 Coma

Coma, or comatic aberration, presents the change of magnification over different zones of the entrance pupil, presented by the coefficient b2 in Eq. (2.13). The shape of the incident wavefront can change dramatically when moving away from the optical axis: if coma is present in the system, the Airy disk seen on the optical axis starts to take the shape of a cone as the distance from the optical axis increases. On the optical axis coma is typically not present; its magnitude is affected by the shape of the lens in question. [11,14]

Comatic aberrations can be spotted by their characteristic asymmetric shape in a spot diagram, as shown in Fig. 2.9. The comet-like shape in the spot diagram is the reason why this type of aberration is known as coma. [11,14]

Figure 2.9: Coma in a spot diagram. The ideal spot would be a small circle located at the highest concentration of rays at the top of the pattern.

One should note that comatic and spherical aberrations have no dependence on each other, but both depend on the shape of the lens. For coma, the shape of the lens defines its sign: coma is negative for a strongly concave positive-meniscus lens and positive for a strongly convex meniscus lens. Because the sign of coma changes depending on the surface profile, it is possible to design a single lens with no coma for a finite object distance. Coma can also be eliminated by choosing the position of the aperture stop accordingly. [11,14]

2.3.5 Astigmatism and Petzval curvature of field

Astigmatism is an aberration in which the sagittal and tangential images form in different locations. In Eq. (2.13), b3 is the coefficient linked with astigmatism.

The main reason for the different locations of the tangential and sagittal images is the dependence on the field angle η and the y-coordinate. By looking at Eq. (2.13) it is also evident that if the only aberration affecting the system is astigmatism, the system will not have astigmatism in the x-direction, since astigmatism depends only on the field coordinate η and the y-coordinate. Sagittal and tangential images are considered to be projections of a point-source image. If a point source is considered as the object to be imaged, the tangential and sagittal images produced by an astigmatic system would be line images. If the sagittal and tangential images focused at the same point, the resulting image would be the point source used as the object. In Fig. 2.10 the concept of sagittal and tangential line images is presented. [10,11]

Figure 2.10: Illustration showing the formation of astigmatic line images. [10]

Another primary aberration that is closely linked to astigmatism is field curvature. Field curvature is present in every optical system, since it is a function of the refractive indices of the lenses and their surface curvatures. It is presented in Eq. (2.13) by the term b4. This basic curvature of an optical system is called the Petzval curvature9. [11]

Field curvature is present off-axis, as can be seen in Fig. 2.11. The field curvature of the tangential and sagittal focal surfaces10 can deviate from the basic Petzval curvature, resulting in different focal points for the tangential and sagittal fields as the field angle increases. It should be noted that the sagittal and tangential focal surfaces stay on the same side of the Petzval curvature in all cases. When there is no astigmatism, the tangential and sagittal focal surfaces lie on the Petzval surface. It should also be noted that field curvature is defined as the longitudinal departure from the ideal flat field. [10]

9Named after the Hungarian mathematician Josef Max Petzval [11].

10The surface that is formed when a line is drawn across the focal points of the field.


Figure 2.11: Petzval, sagittal and tangential field curvature. The dashed blue line presents the tangential field curvature, the red dotted line the sagittal field curvature, and the black solid line the Petzval curvature. In this case astigmatism and the Petzval curvature are undercorrected.

The sign of astigmatism is defined by the field curvature: when astigmatism is positive the tangential field is to the right of the sagittal field, and to the left when astigmatism is negative. Negative astigmatism is also known as undercorrected and positive as overcorrected astigmatism. The Petzval curvature is defined as either undercorrected (forward curving) or overcorrected (backward curving). The sign of the lens defines whether the Petzval curvature is forward or backward curving: positive lenses introduce forward and negative lenses backward curvature to the field. [10]

2.3.6 Distortion

The last of the main monochromatic aberrations is called distortion, and it is presented in Eq. (2.13) by the term b5. This aberration causes the incident off-axis rays to bend either towards the optical axis or away from it, resulting in a change of magnification over different zones of the entrance pupil. This can clearly be seen from its dependence on the field coordinate η and the y-coordinate in Eq. (2.13). Distortion is thus an aperture-independent comatic aberration. [10,14]

Although distortion changes the location of the off-axis rays on the image, it has no effect on the focus of the final image. Only the heights of the off-axis image points are changed, and thus the magnification changes as a function of axial distance, resulting in a symmetric movement of image points. When distortion is positive, each image point is moved outward from the axis, with the most distant points moving the most. The same happens for negative distortion, but the movement is towards the axis. This movement of off-axis points gives the formed image a pattern which appears as a pincushion when distortion is positive and barrel-like when distortion is negative, as can be seen from Figs. 2.12(a) and 2.12(b). Thus positive distortion is known as pincushion distortion and negative as barrel distortion. [10,11]

(a) Barrel distortion. (b) Pincushion distortion.

Figure 2.12: Grid distortion charts for barrel and pincushion distortion. The value of distortion is −15% and 15% respectively. The crosses present the image formed by the imaging system and the grid presents the image at object plane.
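Grid charts like these can be reproduced with a simple third-order radial model, r′ = r(1 + k r²); the model and the coefficient values below are illustrative assumptions, not data from this objective:

```python
import math

def distort(x, y, k):
    """Map an ideal image point through third-order radial distortion:
    r' = r * (1 + k * r^2). k < 0 pulls points toward the axis (barrel),
    k > 0 pushes them outward (pincushion)."""
    scale = 1.0 + k * (x * x + y * y)
    return x * scale, y * scale

# Corner point of a normalized grid (r = sqrt(2) at the corner):
bx, by = distort(1.0, 1.0, -0.075)  # barrel: 15% inward at the corner
px, py = distort(1.0, 1.0, +0.075)  # pincushion: 15% outward at the corner
```

The farther a point lies from the axis, the larger its displacement, which is exactly what produces the characteristic barrel and pincushion grid shapes.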

2.3.7 Chromatic aberration and dispersion in optical material

There exists a relation between the incident wavelength λ and the refractive index n of a material: as the wavelength λ increases, the refractive index n of the material changes, resulting in different focal points for different wavelengths, as illustrated in Fig. 2.13. This relation is known as chromatic aberration. It can be spotted at the sharp edges in an image if the aberration is large enough. Chromatic aberration can be divided into longitudinal and transverse axial aberrations, as in the case of spherical aberration. The effect of chromatic aberration in a positive lens is illustrated in Fig. 2.13 for blue and red light. Chromatic aberration is not part of the five primary aberrations but is considered a separate aberration. [10,11]


Figure 2.13: Illustration of axial chromatic aberration for blue and red light. The solid blue line presents the blue light and the red dashed line the red light.

Chromatic aberration can also change magnification, similarly to comatic aberrations, but now the magnification changes as a function of wavelength. This change in magnification for different colors is called lateral color or chromatic difference of magnification. Lateral color is also present if the image of an off-axis point is spread into a rainbow. [10]

When chromatic aberrations are present, the five primary monochromatic aberrations will vary for different wavelengths. This can be expected, since all wavelengths bend differently at each surface. This variation may not always be significant, but it should be taken into consideration when the primary aberrations are well corrected. [10]
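As a sketch of the wavelength dependence of focus, the thin-lens lensmaker's equation 1/f = (n − 1)(1/R1 − 1/R2) can be evaluated at two wavelengths; the radii and the approximate, catalog-style BK7 indices below are assumptions for illustration only:

```python
def thin_lens_f(n, r1, r2):
    """Thin-lens focal length in air from the lensmaker's equation."""
    return 1.0 / ((n - 1.0) * (1.0 / r1 - 1.0 / r2))

# Approximate BK7-like refractive indices at the F (blue) and C (red) lines:
n_F, n_C = 1.52238, 1.51432

r1, r2 = 50.0, -50.0                 # biconvex lens, radii in mm
f_blue = thin_lens_f(n_F, r1, r2)    # focal length for blue light
f_red = thin_lens_f(n_C, r1, r2)     # focal length for red light
shift = f_red - f_blue               # longitudinal chromatic focal shift, mm
```

Blue light focuses closer to the lens than red, as in Fig. 2.13; with these assumed values the shift is roughly 0.75 mm for a focal length of about 48 mm.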

Dispersion is related to chromatic aberration, but it is not exactly the same thing. Dispersion is defined as the rate of change of the refractive index n with respect to the wavelength λ, expressed as dn/dλ. Dispersion is large at shorter wavelengths and small at longer wavelengths. Generally the dispersion curve itself is not used for characterizing a material when the working region is in the visible spectrum. In these cases the characteristics are commonly described using two parameters: the refractive index nd at the helium d line (0.5876 µm) and the Abbe V-number: [10]

V = (nd − 1)/(nF − nC),    (2.14)

where nF and nC are the refractive indices at the hydrogen F line (0.4861 µm) and the hydrogen C line (0.6563 µm), respectively [10,11]. It should be noted that the basic refracting power of a material can be described with nd − 1, and the measure of dispersion in Eq. (2.14) is ∆n = nF − nC. Thus the V-number gives the relation between dispersion and the amount of bending light undergoes inside the material. A common way to present glasses by their nd and V-number is the Abbe diagram, presented in Fig. 2.14. [10]

Figure 2.14: Abbe diagram. nd is the refractive index at the Fraunhofer d line and V is the Abbe V-number. [14]

The V-number does not always give sufficient information about a material. If there exists a need to work in a secondary spectrum, the relative partial dispersion

PC = (nd − nC)/(nF − nC)    (2.15)

should be used. The relative partial dispersion measures the rate of change in the slope of the refractive index versus the wavelength. [10]

Chromatic aberrations are reduced by utilizing the dispersion of different optical materials. All glasses are divided, somewhat arbitrarily, into two groups: crown and flint. A crown glass is defined to have an Abbe V-number of 55 or more for a refractive index below 1.60, or a V-number of 50 or more for a refractive index over 1.60. The flint glasses then include all of the glasses which are not defined as crown. Crown glass is identified by the last letter K in the name of a glass, and flint by F, respectively. This division can be seen from the Abbe diagram of Fig. 2.14. [10,14]

A lens which corrects chromatic aberrations is called an achromat. A lens is an achromat if the focal lengths for the different wavelengths are nearly equal. Achromatic lenses can be created by cementing together positive and negative lenses made of different glasses. Commonly a positive crown element and a negative thin flint element are used in conjunction to create achromats. [10,14]
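As a small sketch, Eq. (2.14) and the crown/flint limits above can be coded directly; the index triplets are approximate catalog-style values and are assumptions here:

```python
def abbe_v(nd, nF, nC):
    """Abbe V-number, Eq. (2.14)."""
    return (nd - 1.0) / (nF - nC)

def crown_or_flint(nd, v):
    """The arbitrary crown/flint division described above."""
    if (nd < 1.60 and v >= 55.0) or (nd >= 1.60 and v >= 50.0):
        return "crown"
    return "flint"

# Approximate d/F/C indices for a BK7-like and an SF5-like glass:
v_crown = abbe_v(1.51680, 1.52238, 1.51432)   # ~64: low dispersion
v_flint = abbe_v(1.67270, 1.68876, 1.66661)   # ~30: high dispersion
```

A low-dispersion glass gives a high V-number and falls in the crown group, while a dense, highly dispersive glass gives a low V-number and falls in the flint group.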

2.4 Third order aberration theory for spherical and aspherical surfaces

The aberrations of an optical system discussed in the previous section can be presented mathematically by calculating the deviation of a ray from the ideal position on the image plane and creating a power series of the contributions, as presented in Section 2.3.1. These have been divided by their respective orders, but for an axially symmetric system the ray aberrations are of odd order, since the symmetric shape of the system eliminates the even-order aberrations unless the lens itself has been tilted. The first-order aberration coefficients can easily be eliminated just by changing the location of the image plane, and thus they do not give significant information about the aberrations inside a system. The next order is the third-order contributions, followed by the fifth order. The higher-order contributions are directly related to the lower-order contributions. This means that the third-order aberrations have the highest impact on the total aberrations, and thus they are of the highest importance. [10]

Third-order aberrations are calculated by using two paraxial rays: the axial14 and the principal ray [10]. These two rays are traced through the system using ray-tracing equations and the Lagrange invariant L, which is needed in order to ease the calculation process. Since the Lagrange invariant is invariant through the system, the first and last surfaces of the system are used to determine it:

L = yp nu − up ny = h′k n′k u′k,    (2.16)

where y is the height at which the ray strikes the surface, n is the refractive index of the material, and u denotes the ray angle. The subscript p refers to the principal ray, k indicates the last surface, and no subscript is used for the axial

14Originates from the axial intercept of the object and propagates through the edge of the entrance pupil. [10]


Figure 2.15: Illustration of using the principal and marginal ray in defining the Lagrange invariant.

ray. Also, a prime superscript denotes that the quantity is located on the image side of the surface. The Lagrange invariant is illustrated in Fig. 2.15. The point at which the paraxial principal ray intercepts the image plane gives the image height

h′ = L/(n′k u′k),    (2.17)

where n′k is the refractive index and u′k the angle of the axial ray after the last surface of the system. [10]
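A minimal sketch of Eqs. (2.16) and (2.17) with toy ray data (the numbers are illustrative, not traced through a real system):

```python
def lagrange_invariant(n, y, u, yp, up):
    """Lagrange invariant at one surface, Eq. (2.16): L = yp*n*u - up*n*y."""
    return yp * n * u - up * n * y

def image_height(L, nk, uk):
    """Image height from Eq. (2.17): h' = L / (n'_k * u'_k)."""
    return L / (nk * uk)

# Toy data: axial-ray height 10, principal-ray angle 0.05 at that surface:
L_inv = lagrange_invariant(n=1.0, y=10.0, u=0.0, yp=0.0, up=0.05)
h_img = image_height(L_inv, nk=1.0, uk=-0.1)   # after the last surface
```

Because L is invariant, it only needs to be evaluated once; the same value then gives the image height at the last surface without retracing.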

Now we need to define the paraxial angles of incidence i and ip:

i = cy + u,    (2.18)
ip = cyp + up,    (2.19)

where c is the curvature of the surface. Next, the variables B and Bp are formed from the ray-tracing data:

B = n(n′ − n) y(u′ + i)/(2n′L),    (2.20)
Bp = n(n′ − n) yp(u′p + ip)/(2n′L).    (2.21)

By forming B and Bp and calculating i and ip, the transverse aberrations can be written as

σ1 = B i² h′,    (2.22)
σ2 = B i ip h′,    (2.23)
σ3 = B ip² h′,    (2.24)
σ4 = −(n′ − n) c h′ L/(2nn′),    (2.25)
σ5 = h′[Bp i ip + ((u′p)² − up²)/2],    (2.26)

where the subscripts 1−5 correspond to spherical aberration, coma, astigmatism, Petzval field curvature, and distortion, in this order. The corresponding longitudinal aberrations for spherical aberration, astigmatism, and Petzval field curvature can be calculated by dividing σ1, σ3, and σ4 by −u′k.
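The chain from Eq. (2.18) to Eq. (2.26) can be sketched for a single surface as below; the primed quantities are written with a `1` suffix, and the sanity check relies on the fact that a surface with no index change contributes no aberration:

```python
def third_order_contributions(n, n1, L, c, y, yp, u, u1, up, up1, h):
    """Transverse third-order contributions of one surface, Eqs. (2.18)-(2.26).
    Unprimed arguments are object-side, '1'-suffixed are image-side; h is h'."""
    i = c * y + u                                         # Eq. (2.18)
    ip = c * yp + up                                      # Eq. (2.19)
    B = n * (n1 - n) * y * (u1 + i) / (2.0 * n1 * L)      # Eq. (2.20)
    Bp = n * (n1 - n) * yp * (up1 + ip) / (2.0 * n1 * L)  # Eq. (2.21)
    return (B * i * i * h,                                # spherical, Eq. (2.22)
            B * i * ip * h,                               # coma, Eq. (2.23)
            B * ip * ip * h,                              # astigmatism, Eq. (2.24)
            -(n1 - n) * c * h * L / (2.0 * n * n1),       # Petzval, Eq. (2.25)
            h * (Bp * i * ip + (up1 ** 2 - up ** 2) / 2.0))  # distortion, (2.26)

# A surface with no index change (n1 == n, angles unchanged) adds nothing:
null_surface = third_order_contributions(n=1.5, n1=1.5, L=0.3, c=0.02,
                                         y=5.0, yp=1.0, u=-0.1, u1=-0.1,
                                         up=0.05, up1=0.05, h=4.0)
```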

The Seidel coefficients Sj can now be calculated from the transverse third-order contributions by multiplying σ1−σ5 by −2n′k u′k. The Seidel coefficients give the amount of aberration for a single surface as one numerical value. By summing the Seidel coefficients over all surfaces, the effect of aberrations in the whole system can be analyzed; these sums are known as Seidel sums. By noting the surface being calculated with the subscript j and the last surface with k, the Seidel sums can be written as: [10]

S1,sum = −2n′k u′k ∑_{j=0}^{k} σ1,j    (spherical aberration)    (2.27)
S2,sum = −2n′k u′k ∑_{j=0}^{k} σ2,j    (coma)    (2.28)
S3,sum = −2n′k u′k ∑_{j=0}^{k} σ3,j    (astigmatism)    (2.29)
S4,sum = −2n′k u′k ∑_{j=0}^{k} σ4,j    (Petzval field curvature)    (2.30)
S5,sum = −2n′k u′k ∑_{j=0}^{k} σ5,j    (distortion)    (2.31)

The value of the Seidel sums is commonly given in lens units, and the sum can be calculated for any number of surfaces. It should be noted that Eqs. (2.27) - (2.31) have been calculated for transverse values, and the longitudinal values must be calculated as well. The sagittal and tangential Seidel sums differ for coma: the sagittal coma is the Seidel sum S2,sum, but for the tangential value Eq. (2.28) must be multiplied by 3.

It should also be noted that sagittal and tangential astigmatism and Petzval field curvature depend on each other15 by the following relation: [10]

zs ≈S3,sum+S4,sum (2.32)

zt ≈3S3,sum+S4,sum, (2.33)

where zs is the sagittal and zt the tangential curvature of the field. [10,13]
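Given per-surface contributions, the Seidel sums of Eqs. (2.27)-(2.31) and the field-curvature relations (2.32)-(2.33) reduce to a few lines; the numbers below are placeholders, not data for a real system:

```python
def seidel_sums(per_surface, nk, uk):
    """Seidel sums S1..S5, Eqs. (2.27)-(2.31). per_surface is a list of
    (sigma1..sigma5) tuples; nk, uk are n'_k and u'_k after the last surface."""
    factor = -2.0 * nk * uk
    return [factor * sum(s[m] for s in per_surface) for m in range(5)]

def field_curvatures(s3_sum, s4_sum):
    """Approximate sagittal and tangential field curvature, Eqs. (2.32)-(2.33)."""
    return s3_sum + s4_sum, 3.0 * s3_sum + s4_sum

# Two identical placeholder surfaces:
sums = seidel_sums([(1.0, 0.0, 0.5, 0.25, 0.0),
                    (1.0, 0.0, 0.5, 0.25, 0.0)], nk=1.0, uk=-0.1)
zs, zt = field_curvatures(sums[2], sums[3])
```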

The Seidel sums in Eqs. (2.27) - (2.31) can be related to the third-order wavefront errors W(x, y, η) presented in Eq. (2.13) by using the sum of the longitudinal and transverse Seidel sums. Now, by marking these sums as S1,real to S5,real, the aberration coefficients for the third-order wavefront errors W(x, y, η) are: [13]

b1 = (1/8)S1,real,
b2 = (1/2)S2,real,
b3 = (1/2)S3,real,
b4 = (1/4)(S3,real + S4,real),    (2.34)
b5 = (1/2)S5,real.

The contribution of an aspherical surface can be presented in third-order aberration theory as well. For an asphere one has to use Eq. (2.8) with r² = x² + y². First, Eq. (2.8) has to be presented as a power series of r²: [10]

z = (1/2)Ce r² + ((1/8)Ce³ + K)r⁴ + ...,    (2.35)

where the higher-order constants can be neglected due to their small significance. In Eq. (2.35) the equivalent curvature Ce and the fourth-order deformation constant K are defined as: [10]

Ce = C + 2α2,    (2.36)
K = α4 − (α2/4)(4α2² + 6Cα2 + 3C²),    (2.37)

15As discussed in Section 2.3.5.


where αi is the ith-order constant from Eq. (2.8) and C is the curvature of the surface. Now the total third-order aberrations of an aspherical surface, σ1,a−σ5,a, can be presented by adding the aspherical contributions to the respective spherical contributions σ1−σ5 of the surface: [10]

W = 4K(n′ − n)/L,    (2.38)
σ1,a = σ1 + W y⁴ h′,    (2.39)
σ2,a = σ2 + W y³ yp h′,    (2.40)
σ3,a = σ3 + W y² yp² h′,    (2.41)
σ4,a = σ4 + 0,    (2.42)
σ5,a = σ5 + W y yp³ h′.    (2.43)

Interestingly, one can see from Eq. (2.42) that an aspherical surface has no contribution to the Petzval field curvature, and thus Petzval curvature cannot be corrected with an asphere. Also, if the aspherical surface is located at the aperture stop, the only aberration affected by the asphere is spherical aberration [10]. On the other hand, if the aspherical surface is expected to affect the other aberrations, excluding Petzval field curvature, the surface should be located a significant distance from the aperture stop.
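Equations (2.38)-(2.43) make the aperture-stop property easy to verify numerically: at the stop the principal-ray height yp is zero, so every added term except the spherical one vanishes. The parameter values below are arbitrary illustrations:

```python
def aspheric_additions(K, n, n1, L, y, yp, h):
    """Terms an aspheric deformation K adds to sigma1..sigma5, Eqs. (2.38)-(2.43).
    n is the index before the surface, n1 the index after; h is h'."""
    W = 4.0 * K * (n1 - n) / L          # Eq. (2.38)
    return (W * y ** 4 * h,             # spherical aberration
            W * y ** 3 * yp * h,        # coma
            W * y ** 2 * yp ** 2 * h,   # astigmatism
            0.0,                        # Petzval curvature: never affected
            W * y * yp ** 3 * h)        # distortion

# Aspheric surface placed at the aperture stop (yp = 0):
at_stop = aspheric_additions(K=1e-5, n=1.0, n1=1.5, L=0.5, y=8.0, yp=0.0, h=5.0)
```

With yp = 0 only the first entry is nonzero, which is exactly why an asphere at the stop corrects spherical aberration alone.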

2.5 Modulation Transfer Function

The performance of a lens is generally measured using the Modulation Transfer Function (MTF). In MTF measurements, pairs of light and dark lines of equal width are used to measure the contrast of a system; one such pair is commonly referred to as a cycle. The performance of the system is defined by how many cycles per unit length the system can distinguish, and at what modulation. The modulation M is defined similarly to the contrast of an image:

M = (Imax − Imin)/(Imax + Imin),    (2.44)

where Imax is the maximum illumination level of the image and Imin is the minimum. Because the images from which the illumination levels are calculated are line images, the modulation M can be used to determine the image quality. [10]
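Eq. (2.44) applied to a sampled intensity trace recovers the modulation depth directly; the sinusoidal pattern parameters below are arbitrary:

```python
import math

def modulation(samples):
    """Modulation (contrast) of an intensity trace, Eq. (2.44)."""
    i_max, i_min = max(samples), min(samples)
    return (i_max - i_min) / (i_max + i_min)

# One period of a sinusoidal intensity pattern with modulation depth 0.6:
M_true = 0.6
trace = [1.0 + M_true * math.cos(2.0 * math.pi * t / 1000.0) for t in range(1001)]
M_measured = modulation(trace)   # recovers M_true
```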


Now, by using the spatial frequency16 ν and the modulation M, we can define the MTF as the ratio of the modulation Mi in the image to the modulation Mo in the object, as a function of the spatial frequency ν:

MTF(ν) = Mi/Mo.    (2.45)

The real MTF of a system is defined by considering the transfer of the modulation depth of an object with a sinusoidal intensity distribution from the object plane to the image plane. By assuming the object field to be incoherent, we can consider the object as a continuous distribution of point sources with uncorrelated amplitudes, for which the intensity at the object Io can be described as: [13]

Io(x0, y0) = I0[1 + Mo cos(2π[x0 fx + y0 fy])],    (2.46)

where I0 is the intensity of light before the object, and fx and fy are the spatial frequencies in the x and y directions, respectively. As all point sources at the object plane create a corresponding diffraction spot in the image plane, the intensity distribution at the image plane Ii can be written as [13]

Ii(x, y) = ∫∫ Io(x0, y0) h(x − x0, y − y0) dx0 dy0,    (2.47)

where h(x, y) is the point spread function (PSF)17. By defining the PSF to take into account the possible wavefront aberrations in the exit pupil, the PSF has the form [13]

h(x, y) = |A(x, y)/A(0, 0)|²,    (2.48)

where A(x, y) is the amplitude of the wave in the image plane. Before we can define the real MTF we must first take the Fourier transform of the PSF: [13]

H(νx, νy) = |H(νx, νy)| e^{iΦ(νx, νy)} = ∫∫ h(x, y) e^{2πi(xνx + yνy)} dx dy,    (2.49)

where the function Φ(νx, νy) is known as the phase function, m presents the magnification of the system, νx = fx/m, and νy = fy/m. Now by substituting Eq. (2.46)

16Cycles per unit length

17PSF is the normalized intensity function of a point object's diffraction pattern [13].


into Eq. (2.47), and finding H(νx, νy) and H(0, 0) by replacing x0 with x/m and y0 with y/m, we get the optical transfer function [13]

OTF = H(νx, νy)/H(0, 0).    (2.50)

The modulus of the OTF is the MTF, and thus Eq. (2.45) can be written as: [13]

MTF(ν) = |OTF| = |H(νx, νy)|/H(0, 0).    (2.51)

The phase of the OTF is the phase transfer function (PTF) of the system [13]. The PTF can also be used to examine aberrations: if the PTF is linear with frequency, the aberration in question is a lateral displacement of the image, but for a nonlinear PTF, effects on image quality can be spotted. The main emphasis in lens design is generally to use the MTF as a presentation of the performance of the system. The MTF is a powerful tool, because by plotting the MTF against the spatial frequency ν it is possible to tell the performance of an image-forming system from just a single figure. [10,13]

The MTF can be applied to lenses, films, phosphors, image tubes, and to complete systems. One advantage of the MTF is that it can be calculated easily for a combination of two or more systems by multiplying the MTF values of the individual systems. Interestingly, this does not work for optical components which are coherently connected directly to one another without a diffuser in between. This is caused by the fact that aberrations in two or more optical components can negate each other and provide a better overall performance. [10]
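The multiplication rule for incoherently coupled components is a one-liner; the component values below are hypothetical:

```python
def cascade_mtf(component_mtfs):
    """System MTF at one spatial frequency for incoherently coupled
    components: the product of the individual MTF values."""
    total = 1.0
    for m in component_mtfs:
        total *= m
    return total

# Hypothetical lens, film, and image-tube MTFs at the same spatial frequency:
system_mtf = cascade_mtf([0.8, 0.9, 0.95])
```

Since each factor is at most one, the cascaded system can never out-resolve its weakest incoherently coupled component; coherently coupled elements, whose aberrations can cancel, are the exception noted above.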

2.6 Operands and the Merit function

In Zemax OpticStudio the optimization of a lens is handled by numerical calculations and user-defined operands. Operands are the limits the user gives OpticStudio to guide the lens design process. Some operands are given a higher user-defined weight in order to steer the optimization process in a certain direction. In the optimization process the merit function is used to decide whether the new iteration of the lens is better than the previous one. It is defined as:

MF² = ∑ Wi(Vi − Ti)² / ∑ Wi,    (2.52)

where Wi is the absolute value of the user-defined weight, Vi is the current value, and Ti is the target value of the operand i. The merit function thus gives one numerical value that presents how far the current iteration is from the ideal user-defined system, where zero presents the ideal system. [12]
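Eq. (2.52) can be sketched directly; the operand tuples below (weight, current value, target) are hypothetical examples, not actual OpticStudio operands:

```python
import math

def merit_function(operands):
    """Merit function MF from Eq. (2.52); operands are (W, V, T) tuples.
    Zero means the current system matches the user-defined ideal exactly."""
    num = sum(abs(w) * (v - t) ** 2 for w, v, t in operands)
    den = sum(abs(w) for w, _, _ in operands)
    return math.sqrt(num / den)

mf = merit_function([(1.0, 27.2, 27.0),    # e.g. an EFFL operand in mm
                     (2.0, 0.05, 0.0)])    # e.g. an RMS spot operand
```

The weights make the trade-off explicit: doubling a weight makes the optimizer treat that operand's residual as twice as costly.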

2.7 Tolerancing

Tolerancing is a process in which the sensitivity of the design to manufacturing errors is determined [13]. This is the last and most essential step in the design process, in which the final price of manufacturing the system is realised. Tolerances are the actual limits for the fabrication errors. Modern commercial optical design software generally includes tolerancing features [10].

The basic tolerancing procedure consists of multiple steps, of which the first is to set the basic tolerances for the lens. For this, a default tolerance generation algorithm can be used if available; the default tolerances should also be modified, or new operands added, to specify the system requirements. The next step is to add compensators and to set the allowed ranges and the surfaces to be compensated for these operands. Possible compensators include, for example, the back focal length and the tilt of the image surface. After this, the criterion on which to perform the tolerancing must be set, which can be a simple RMS spot radius, wavefront error, MTF, boresight error, or something else entirely. This depends on the designer's preference, the system requirements, and the available design software. The last step is performing the analysis and reviewing the tolerance data. [12]

The analysis of the tolerances can be made using a number of numerical methods, where the tolerance range is analyzed individually for all of the parameters of the system. A powerful analysis is the Monte Carlo simulation, where sensitivity18 and inverse19 sensitivity analyses are used to consider the effect of each tolerance on the system performance individually. Monte Carlo analysis is used to estimate the aggregate effect of all tolerances by creating a series of random lenses which meet the tolerances. The criterion is then evaluated for all of these randomly generated lenses from real ray-tracing data. This method provides an accurate simulation of the expected performance of the actual manufactured lens. [12]

18The change in the criterion for the specified tolerances is determined individually for each tolerance [12].

19The limit for each tolerance is calculated individually for a given permissible change in the criterion [12].
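A toy version of the Monte Carlo step can be sketched as below; the perturbed parameters, the tolerance, and the "criterion" are all stand-ins for the real tolerance operands and ray-traced criteria:

```python
import random

def monte_carlo_tolerancing(nominal, tol, criterion, n_trials=200, seed=1):
    """Evaluate a criterion over lenses randomly perturbed within +/- tol,
    mimicking the Monte Carlo tolerance analysis described above."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_trials):
        perturbed = [p + rng.uniform(-tol, tol) for p in nominal]
        results.append(criterion(perturbed))
    return results

nominal_radii = [50.0, -45.0, 80.0]   # stand-in surface radii, mm

def spot_criterion(radii):
    """Hypothetical criterion: total deviation from the nominal radii."""
    return sum(abs(r - n0) for r, n0 in zip(radii, nominal_radii))

spots = monte_carlo_tolerancing(nominal_radii, tol=0.05, criterion=spot_criterion)
```

The distribution of `spots` then plays the role of the expected as-built performance statistics reviewed in the final step.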

The hardest part in tolerancing is finding the limits for the acceptable errors, since optical manufacturing is possible with almost any degree of precision as long as sufficient amounts of time and money are available. The only limit is the measuring accuracy, since it is impossible to conduct precision manufacturing if one cannot measure the fabrication progress precisely. This amount of available precision can make a design too expensive if the specified tolerances are too strict. Underspecified tolerances, on the other hand, can cause severe errors to appear on the surfaces and thus destroy the performance of the system. Enough time and effort should be spent on specifying the tolerances and the required performance of the design to prevent large manufacturing errors from reaching the customer. [10]


Chapter III

Design and manufacturing considerations

In this chapter a spherical Double-Gauss lens is introduced as a base for an aspherical design. Multiple surfaces of this lens will be transformed into aspherical shapes. The design of the lens is conducted using Zemax OpticStudio. Also, the manufacturing and financial aspects of aspherical lenses are discussed.

3.1 Spherical Double-Gauss lens

A spherical Double-Gauss lens system, presented in Fig. 3.1, is used as the base for aspherization. The system's weight is 31.6 g and it consists of seven lenses in total.

Figure 3.1: Spherical Double-Gauss lens used as base for aspherical design. AS is the aperture stop, and L1 - L7 present the different lenses used.


The effective focal length (EFFL) f is 27 mm, and the working f-number is f/1.3. The lens system is designed for a wavelength range of 450−950 nm and a field of view (FOV) of 40°. The chromatic focal shift of this objective is 78.69 µm, presented as a function of wavelength in Fig. 3.2.

The objective in question suffers from relatively high barrel distortion of ≈ −6.2 mm, as can be seen in Fig. 3.3(a). Although the distortion is high, it will not be improved upon in the aspherization phase, because improving it is not needed for the purpose of this objective.

The field curvature of the objective is quite different for the sagittal and tangential fields, as can be seen in Fig. 3.3(a). The shapes of the curvatures are similar, but as the y-coordinate increases the sagittal field goes more negative, while the tangential field stays the same. At the edge both fields turn towards positive field curvature quite rapidly.

The MTF curves for this lens are presented in Fig. 3.4(a). From this figure one can see that the performance of this lens drops quite sharply for the 14° and 20° field angles. After ≈ 40 cycles/mm the performance drops to zero, but some recovery can be seen afterwards. This drop is likely caused by the comatic aberrations and distortion in the objective, especially at the edge fields; coma and distortion affect the edge fields in particular, while the center remains unaffected. The MTF at the central field is a key characteristic for this objective. It is 0.65 at 50 cycles/mm, and it should remain near that level unless other parameters are improved significantly.

The effects of the different surfaces on the aberrations can be seen in Fig. 3.4(b). The main aberrations this objective suffers from are distortion and coma, as can be seen from the sum of the Seidel aberrations in Fig. 3.4(b). Some astigmatism and spherical

Figure 3.2: Chromatic focal shift of the objective.


(a) Distortion and field curvature in the spherical Double-Gauss lens. T presents the tangential and S the sagittal field.

(b) Lateral color for the spherical lens.

Figure 3.3: Diagrams showing field curvature, distortion and lateral color for the spherical lens.

aberrations can be seen in the diagram, but their values are low compared to coma and distortion. The spot diagram in Fig. 3.5(a) also shows some effect of coma on this objective, and it can be seen especially at the edge fields of 14° and 20°, as one would expect.

(a) MTF versus field curve. (b) Seidel diagram showing the effect of aberrations for all surfaces and their sum.

Figure 3.4: MTF and Seidel diagram for the spherical lens.


(a) Spot diagram. (b) Vignetting diagram.

Figure 3.5: Spot size and vignetting diagram for the spherical lens.

The spot size of the objective is around 38 µm for the 0° and 20° fields, but for the 14° field the spot size grows to 56 µm, as seen in Fig. 3.5(a). This can also be seen in the MTF curve of Fig. 3.4(a), where the sagittal 14° field's MTF drops sharply around 25 cycles/mm. Some coma can be seen at the 14° and 20° fields in the spot diagram. The smaller spot size at the edge field is mainly caused by the large amount of vignetting, as seen in Fig. 3.5(b).

The vignetting curve in Fig. 3.5(b) shows how large a portion of the rays does not propagate through the whole system. The vignetting at the edge fields is currently 24.7%, and it could be reduced to improve the light collection ability of the objective.

3.2 Aspherical designs

The objective of the aspherization of the lens presented in the previous Section 3.1 is to reduce the cost, size, or weight of the lens. Also, the possibility of improving the optical properties of the lens is a priority. There exist multiple ways to achieve these results, and the different solutions take time and numerous calculations to find, as is usually the case in optical design [10]. It is also possible that this objective cannot be improved with just one aspherical surface, and more may be needed.

The main parameters of the spherical design will not be changed. The EFFL will remain at 27 mm, the f-number is limited to f/1.3, the field of view will remain at 40°, and the size of the entrance pupil is not touched. All of the thicknesses
