
A MODULAR LABVIEW PROGRAM FOR CONTROLLING MULTIMODAL MICROSCOPE IMAGING PLATFORM

Faculty of Engineering and Natural Sciences
Master of Science Thesis
April 2020


ABSTRACT

QI YUAN: A Modular LabVIEW Program for Controlling Multimodal Microscope Imaging Platform
Master of Science Thesis

Tampere University

Factory Automation and Industrial Informatics Engineering
Examiners: Professor Jari Hyttinen and Professor Pasi Kallio
April 2020

Automating a multi-device imaging setup is often needed in the bioengineering field, not only to enable efficient image data acquisition but also to improve the accuracy of the acquired information.

Multimodal imaging shows great potential in the growing field of biosciences, for example in cell and tissue culture imaging.

This master thesis project aims to develop a modular, loosely coupled LabVIEW software that automates a multimodal microscopy imaging platform integrating optical projection tomography (OPT) and electrical impedance tomography (EIT). The integrated software must enable the OPT and EIT imaging modes and, most importantly, allow fast simultaneous data acquisition in the OPT/EIT imaging mode over a full 360° rotation. The data acquired with the integrated software should be of high quality and usable for image reconstruction. Meanwhile, the architecture of the software should be modular and extensible for further development.

The integrated software was implemented with LabVIEW and provides a user interface with three different imaging modes: OPT, EIT and OPT/EIT. Producer-consumer and event-driven design patterns were applied in the software architecture, and Object-Oriented Programming (OOP) was adopted for the implementation of the individual software components, which makes the integrated software modular and scalable. In addition, various tests were carried out to assess the integrated software in terms of time performance, the accuracy of the imaging data and the feasibility of simultaneous rotational EIT and OPT data acquisition with improved time-cost performance.

The tests showed that the integrated software acquires data of the same quality as the original software in all three modes with respect to the precision of the data content, and that it enables synchronous imaging data acquisition with excellent time-cost performance, shortening the overall imaging time significantly. It also simplifies the workflow of the imaging process, regardless of the selected imaging mode, as well as the configuration of the imaging setup. In addition, the modular design of the integrated software makes it possible to incorporate other functionalities. Therefore, the integrated software fulfils the requirements of the thesis.

In the future, if new, innovative imaging methods are added to the LabVIEW software, the integrated software will provide the flexibility and the possibility to acquire different kinds of imaging data all at once.

Keywords: LabVIEW, 3D imaging, graphical programming, modularity


PREFACE

This master thesis project was carried out at Tampere University, in the Arvo building in Tampere, in the Computational Biophysics and Imaging Group led by Prof. Jari Hyttinen.

First of all, I would like to sincerely thank both of my examiners, who trusted and encouraged me even though I had to learn LabVIEW programming from scratch.

Secondly, I would like to thank M.Sc Ana Soto, who wrote the original OPT code, M.Sc Toni Montonen, who provided technical support for integrating the OPT code, and Ph.D Raul Land (Tallinn University of Technology, Estonia), who developed the original EIT code.

A special thank you goes to M.Sc Mari Lehti-Polojärvi, who guided me through the complete master thesis project, helped me overcome the problems I had and provided the rotational OPT/EIT mode test data and image reconstructions.

Lastly, I appreciate that I had this valuable opportunity to collaborate with researchers from different backgrounds, learn novel imaging methods like OPT and EIT and get familiar with graphical programming in LabVIEW throughout the whole project. In addition, I would like to thank my family, who have been supporting me all the time; without all of you, I could not be here today.

Vantaa, 28th April 2020

QI YUAN


CONTENTS

1 Introduction . . . 1

2 Literature Review . . . 3

2.1 Imaging Methods . . . 3

2.1.1 Optical Projection Tomography . . . 3

2.1.2 Electrical Impedance Tomography . . . 4

2.2 Virtual Instrument . . . 6

2.2.1 Definition and Principle . . . 7

2.2.2 Software Structure . . . 8

2.2.3 Interface Bus . . . 10

2.2.4 Advantage and Disadvantage of Virtual Instruments . . . 11

2.3 Software Design . . . 11

2.3.1 Software Architecture . . . 11

2.3.2 Modularization . . . 13

2.3.3 User Interface . . . 15

3 Research Methodology and Materials . . . 16

3.1 The Integration of Imaging Methods . . . 16

3.2 Software Workflow and Algorithm Design . . . 18

3.3 The Key Components and Techniques For The System . . . 20

3.3.1 Programming Language and Development Software . . . 20

3.3.2 Computer . . . 21

3.3.3 Camera . . . 21

3.3.4 EIT Measurement Device . . . 22

3.3.5 Interface Buses . . . 23

3.3.6 Sample Manipulation Platform . . . 23

3.4 Software Implementation . . . 25

3.4.1 The Architecture of Framework . . . 25

3.4.2 The Implementation of Queue . . . 27

3.4.3 UI Structure Design . . . 30

3.4.4 The Implementation for Motors and Camera . . . 31

3.4.5 The Implementation of EIT Device . . . 34

3.4.6 The Implementation of OPT, EIT and OPT/EIT Modes . . . 35

3.4.7 UI Design . . . 39

3.4.8 Error Handling . . . 42

3.5 Samples and Testing . . . 42

3.5.1 EIT Data Verification . . . 43

3.5.2 OPT Data Verification . . . 44

3.5.3 OPT/EIT Mode Test . . . 45


4 Result and Discussion . . . 46

4.1 User Interface Analysis . . . 46

4.2 Test Results Analysis . . . 51

4.2.1 EIT Result . . . 51

4.2.2 OPT Result . . . 55

4.2.3 OPT/EIT Result . . . 57

4.2.4 Time Costs Performance Analysis . . . 59

4.3 Functionality and Usability . . . 61

4.4 Challenges . . . 62

4.5 Future Work . . . 63

5 Conclusion . . . 64

References . . . 65

Appendix A Test Results Analysis . . . 68

A.1 EIT Result Analysis . . . 68

A.1.1 EIT Raw Data from EIT Mode . . . 68


LIST OF SYMBOLS AND ABBREVIATIONS

∆σ Conductivity Change

2D Two-Dimension

3D Three-Dimension

ADC Analog-to-Digital Converter

API Application Programming Interface

CMOS Complementary Metal Oxide Semiconductor

CPU Central Processing Unit

DAQ Data Acquisition

DLL Dynamic Link Library

DSP Digital Signal Processor

EIT Electrical Impedance Tomography

FBP Filtered Back-Projection

FGV Functional Global Variable

FIFO First-In-First-Out

GPIB General Purpose Interface Bus

I/O Input/Output

IDE Integrated Development Environment

LabVIEW Laboratory Virtual Instrument Engineering Workbench

MAC Multiply-and-Accumulate

OOP Object-Oriented Programming

OPT Optical Projection Tomography

PBS Phosphate Buffered Saline

PC Personal Computer

QMH Queued Message Handler

RAM Random-Access Memory

RMS Root Mean Square

RS-232 Recommended Standard 232

TAB Tabbed Document Interface

USB Universal Serial Bus


VI Virtual Instrument

VISA Virtual Instrument Software Architecture

VXI VME eXtension for Instrumentation


1 INTRODUCTION

Automating the imaging data acquisition setup is one of the key requirements for acquiring good-quality imaging data. A well-implemented, customized program is very helpful for researchers to capture good data sets and to reduce the data acquisition time.

Recently, Electrical Impedance Tomography (EIT) [1] and Optical Projection Tomography (OPT) [2] have been proposed and have gained popularity for imaging the internal structure of a sample, for example cell cultures. OPT is a non-invasive, high-resolution imaging technique that images cells in a Three-Dimension (3D) mesoscopic environment. EIT, in turn, is a non-invasive, non-damaging and quick imaging technique in which the electrical impedance of the sample is measured to reconstruct a conductivity image.

Compared to mechanically sectioning the sample, these methods keep the sample intact while observing it at a depth that is not reachable with conventional microscopy.

In order to improve the accuracy of the imaging and to acquire more information about a sample, researchers have been working on integrating multiple imaging modalities to image the same specimen simultaneously. For example, the integration of electrical and optical imaging would provide a novel imaging tool in the bioengineering field [3].

However, during the imaging process a living biological sample changes quickly over time. Therefore, a fast imaging tool enabling simultaneous multimodal imaging is crucial, one that supports acquiring data from multiple imaging methods at the same time within a short period.

In Prof. Jari Hyttinen's laboratory, a multimodal imaging hardware setup was built prior to this work, and it allowed researchers to collect data from the OPT [4] and EIT [5] methods separately with separate software. Both methods were designed to image a rotated sample. The original OPT software supported the motor rotation functions during the data acquisition phase, but the original EIT software did not support the motor functionality and was therefore unable to rotate the sample while acquiring data. Due to these limitations, the original OPT and EIT software could not perform simultaneous data acquisition. Instead, the EIT software required the user to rotate the sample and save the data manually, which was slow and prone to errors.

Therefore, a modular and efficient software program was needed to manage the imaging hardware in a fully automated way. The new software should provide strong support for the multimodal imaging device functionalities and for automating the data acquisition process.


The objective of the thesis project was to build a customized, modular LabVIEW software for our in-house built imaging hardware setup. The software development cycle had certain requirements. First of all, the integrated software shall reuse parts of the original software to cut down the overall development time.

Especially for the EIT module, the original EIT code for data acquisition and visualization must be kept unchanged so that it retains exactly the same functionality when integrated with the OPT module. Secondly, each critical software component of this imaging setup must be modular and loosely coupled, meaning that each component should achieve its functionality on its own without interference from the others. Lastly, the integrated software should allow adding other optical imaging devices in the future if needed.

In terms of performance, the integrated software must give the user the option to choose between OPT mode, EIT mode and OPT/EIT mode when acquiring image data. Whichever mode the user selects, it must support flexible data acquisition angles and reduce the time cost of the whole process. Meanwhile, the user interface of the software must be simple and easy to operate without separate instructions, so that any user can go through a data acquisition process easily.

Lastly, the outline of the thesis consists of five main chapters. The second chapter explains the critical theoretical concepts needed to implement the thesis work. Chapter 3 discusses the research methodology and materials used to fulfil the aim of the thesis, including the implementation used to validate the results of the designed system. The fourth chapter illustrates the results of the thesis work and presents a discussion of the integrated software, and the last chapter concludes the thesis work.


2 LITERATURE REVIEW

This chapter describes the state-of-the-art technologies related to this master thesis project. It consists of three primary parts: modern 3D imaging methods, mainly covering OPT and EIT, the virtual instrument, and software development. These three main sections provide a solid theoretical background for the project.

2.1 Imaging Methods

2.1.1 Optical Projection Tomography

Traditional optical microscopy usually either optically slices the specimen or images mechanically sectioned specimens to construct a 3D image, which can inevitably damage the sample. Because of this, the thickness of a specimen significantly affects the quality of the 3D imaging. To overcome this shortcoming, OPT [2] was developed to produce good-quality, high-resolution 3D images in a non-invasive way. With OPT, it is possible to image a sample with a thickness between 1 and 10 millimetres.

Normally, OPT consists of two different modes: fluorescent OPT and bright-field OPT [4].

A typical OPT system is illustrated in figure 2.1. This in-house-built OPT setup supports both bright-field OPT and fluorescent OPT. The test samples are attached to a tube and immersed in a transparent chamber filled with aqueous solution. Platform B is used for sample manipulation and can rotate the sample 360° clockwise during the imaging phase. A rotational step motor installed on platform B rotates the sample so that one projection image is taken at each rotational position.

For bright-field OPT, the sample is illuminated by the white light source (LED 1) through a telecentric backlight illuminator that collimates the light. After passing through the sample chamber, the collimated light beams are gathered by the image detection system, which consists of a long working distance objective (Ob), a pinhole (P), a tube lens (TL) and a sCMOS camera. At the end of the stage, the sCMOS camera captures projection images of the sample.

For fluorescent OPT imaging, the samples have to be autofluorescent or stained beforehand. Another light source (LED 2) is used to excite fluorophores in the sample cells, and its light is directed through a bandpass filter (F) and a diffuser lens (LD). The emitted light is then collected by the same image detection system as in bright-field OPT, except that a suitable emission filter in the filter wheel is now used.

Figure 2.1. The OPT system includes three parts: illumination, sample manipulation and image detection. The illumination components of bright-field OPT consist of a white light source (LED 1) and a telecentric lens (L). The fluorescent OPT illumination has a bandpass filter (F), a diffuser lens (LD) and a light source (LED 2). The sample manipulation part includes a sample (S) and a platform (B). The image detection system is composed of the objective lens (Ob), a filter in a filter wheel (FW), a pinhole (P), a tube lens (TL), and a sCMOS camera [6].

The projection images acquired from the OPT process cannot be viewed directly as a 3D image. The collected data must first be converted into 3D volumetric data: a volumetric 3D reconstruction is performed in MATLAB using a Filtered Back-Projection (FBP) [4] algorithm. Finally, the 3D model can be visualized in external software.
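For illustration, the same reconstruction step can be sketched outside MATLAB. The following is a minimal Python sketch using scikit-image's iradon, assuming the projections of one transverse slice have already been assembled into a sinogram; it is not the MATLAB pipeline actually used in this work.

```python
# Illustrative filtered back-projection of one transverse OPT slice.
# Assumes the projections have been stacked into a sinogram beforehand.
import numpy as np
from skimage.transform import iradon

def reconstruct_slice(sinogram: np.ndarray, step_deg: float) -> np.ndarray:
    """Reconstruct one slice from its sinogram.

    sinogram : 2-D array of shape (detector_pixels, n_projections),
               one column per projection angle.
    step_deg : rotational step between projections, e.g. 0.9 for 400
               projections over a full 360 degree rotation.
    """
    angles = np.arange(sinogram.shape[1]) * step_deg
    # Ramp filtering followed by back-projection (classic FBP).
    return iradon(sinogram, theta=angles, filter_name="ramp", circle=True)
```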

2.1.2 Electrical Impedance Tomography

EIT is a relatively novel imaging method that has evolved significantly over the past few decades in medical and biological imaging. It is a non-invasive, non-damaging and fast imaging technique that is safer and cheaper than many other imaging methods, such as X-ray imaging [1]. Therefore, EIT has enormous potential to play a critical role in clinical diagnosis and in monitoring a variety of disease conditions inside the human body. For example, Ethan K. Murphy [7] adopted an EIT imaging approach for detecting breast cancer.

Human tissues are composed of cells and thin membranes with a high resistivity, which electrically behave like tiny capacitors [8]. The impedance of biological tissue includes both resistance and reactance. The conductivities of body fluids contribute the resistive part, whereas the electrical behaviour of the cell membranes is frequency-dependent and provides the reactive component. The EIT imaging method is based on gathering these impedance data. The impedance measurement can be made over a wide range of frequencies, typically from 20 Hz to 1 MHz [9]. When a high-frequency current is injected into the tissue, the current passes through the cell membranes, so the impedance properties depend on the tissue and the liquids both inside and outside the cells. At low frequencies, however, the membranes oppose the flow of current. Therefore, the measured impedance depends on the applied frequency.

The bioimpedance of human tissue contains plenty of information about changes in the tissue under different conditions, which is very helpful in exploring its internal structure. Bioimpedance measurements indicate not only the water content of the tissue but also characteristics of the cells, and these properties normally vary with the size of a cell and the thickness of its membrane. The EIT method uses this bioimpedance information to reconstruct images of the tissue content.

Typically in EIT, electrical current excitations are applied on the surface of the target or sample and voltage is measured with several electrodes. This impedance data is used to reconstruct the conductivity distribution of the target or sample.
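To make the idea of such electrode patterns concrete, the sketch below enumerates a generic adjacent-drive tetrapolar scheme. This is only an illustration of the principle, not the measurement table of the device used in this thesis, which is loaded from a predefined Excel file (see subsection 3.3.4).

```python
# Generic adjacent-drive tetrapolar pattern for an n-electrode EIT ring.
# Illustrative only; the actual measurement table of the EIT device used in
# this work is configured separately.
from typing import Iterator, Tuple

def adjacent_pattern(n_electrodes: int = 16) -> Iterator[Tuple[int, int, int, int]]:
    """Yield (current+, current-, voltage+, voltage-) electrode indices.

    Current is injected between each adjacent electrode pair, and the
    voltage is measured between every other adjacent pair that does not
    share an electrode with the injection pair.
    """
    for inj in range(n_electrodes):
        c_pos, c_neg = inj, (inj + 1) % n_electrodes
        for meas in range(n_electrodes):
            v_pos, v_neg = meas, (meas + 1) % n_electrodes
            if len({c_pos, c_neg, v_pos, v_neg}) == 4:  # skip driven electrodes
                yield c_pos, c_neg, v_pos, v_neg

# A 16-electrode ring gives 16 injections x 13 voltage pairs = 208 readings.
```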

Based on the EIT imaging principle, a variety of EIT devices have been built for experiments. A typical EIT data collection system consists of multiple electrodes with a single current source and a multiplexer that is used to inject current into the electrodes [5]. The voltage is measured between the other electrodes for image reconstruction. For example, figure 2.2 shows a cell culture chamber covered by a circular imaging chip that consists of 16 equally spaced electrodes. However, these devices share a common characteristic: the equally distributed electrodes around the sample occupy a large portion of the target surface. This makes it difficult to increase the quality of the data by adding more electrodes, and the area left for other sensors becomes limited. Therefore, it is important to have a system that not only reduces the number of electrodes used but also possibly increases the number of independent measurements. Recently, a rotational EIT (rEIT), with a modification called limited angle full revolution rotational EIT (LAFR-rEIT), has been proposed and investigated in terms of its feasibility [11].

Figure 2.2. A circular imaging chip that has 16 equally spaced electrodes covers the surface of a cell culture chamber [10].

LAFR-rEIT is an innovative imaging method that modifies traditional EIT to be compatible with multimodal setups. The static electrodes cover only limited external areas of the sample chamber, and the sample is rotated a full 360° to acquire a set of measurements. The advantage of this method is that the number of independent measurements is flexible and depends on the rotational angle: the smaller the angle, the more measurements are obtained. At the same time, fewer electrodes are employed, which creates more space for other imaging methods and also reduces the complexity of the EIT instrumentation. Figure 2.3 shows the novel design of the LAFR-rEIT setup. In this schematic design the eight electrodes remain stationary during the rotation process and are distributed evenly on the inner, opposite sides of the container. The electrodes cover only a part of the exterior, and the majority of the area is left uncovered.

Figure 2.3. The rotational EIT [11] increases the number of measurements by rotating the sample. It only uses 8 electrodes attached to the surface of a specimen. Therefore, this design leaves much space for other sensors to measure the data.
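As a rough, illustrative calculation: if each rotational position yields m independent tetrapolar readings, a rotational step of θ degrees over a full revolution gives 360/θ positions and therefore m · 360/θ readings in total. Halving the step from 18° to 9°, for instance, doubles the number of positions from 20 to 40 and hence doubles the total number of measurements without adding a single electrode.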

2.2 Virtual Instrument

Measurement instruments have experienced a significant shift from hardware-centered towards software-centered designs. With the development of computer technology, virtual instrument performance has improved enormously since such instruments were first released to the market. This section introduces the software architecture of a virtual instrument and the widely used interface buses, and discusses the advantages and disadvantages of virtual instruments.


2.2.1 Definition and Principle

Driven by the motivation of lowering the cost of measurement equipment and increasing measurement speed, virtual instruments have developed rapidly over the last few decades. A virtual instrument is based on a modern computer or workstation that provides powerful computation, robust data processing and the display of complex results. At the same time, a user can simplify the design, deployment and usage of a programmable measurement system by utilizing a user-defined, virtual interface to acquire data.

The word "virtual" can be interpreted from two aspects:

1. virtual control panel

The physical control panel of a conventional instrument has been replaced by a software user interface on the computer, where similar icons substitute for the buttons and switches in terms of appearance.

2. invisible measurement capacity

Compared to a traditional instrument, the measurement functions of a virtual instrument are implemented in software, which provides better performance in many respects, such as data collection and processing.

Nowadays, a typical virtual instrument generally contains these necessary components: sensors, a Data Acquisition (DAQ) [12] device and a computer equipped with programmable software, as displayed in figure 2.4.

Figure 2.4. The composition of a virtual instrument includes three essential units. The sensors acquire the data and pass it to a DAQ device or board [13], which is able to process the raw data. The processing is divided into two steps: signal conditioning and ADC conversion. After that, the data is sent to a computer via communication handled by drivers. Finally, customized software displays and further processes the measurement results in many forms.

Compared to a hardware-centered instrument, the main difference is that a virtual instrument uses a computer for data processing. The computer must have application software for data computation and representation, and driver software for device communication. In addition, field buses are used for transferring data between the computer and the DAQ devices.

2.2.2 Software Structure

The customized application software that controls the instrument and handles the data is considered one of the largest innovations of virtual instruments compared to traditional instruments. Nowadays most application software shares a similar design structure to enhance the efficiency of data transmission. The structure of the application software of a virtual instrument can therefore be divided into five critical layers, described here in hierarchical order:

• User Interface

• Data Process and Analysis

• Instrument Driver

• Virtual Instrument Software Architecture (VISA)

• Input/Output (I/O) Interface

Figure 2.5. In a virtual instrument, the software structure is classified into five layers. The top layer interacts with user actions. The software algorithm then reacts and processes the event by calling the instrument driver, which either commands or queries the instrument. Most drivers communicate with the embedded system of the instrument by using VISA [14] via different I/O interfaces.

Figure 2.5 displays the structure of the application software in a virtual instrument. Data transfer between adjacent layers is bidirectional, which regulates the data transmission in the majority of virtual instrument systems. The user interface handles user events and displays the measured data. The data process and analysis layer applies algorithms for data computation. The third layer, the instrument driver, connects the application software to the specific instrument to send and receive commands. VISA sits above the I/O interface and solves the device coupling problem regardless of which I/O interface is used in the virtual instrument.

Figure 2.6. The VISA API library provides many simple functions to communicate with different devices over various interfaces. It supports the serial port interface, GPIB interface, VXI interface and PC bus interface, regardless of device manufacturer.

During the development of virtual instruments, device coupling was one of the main challenges. In order to promote communication and improve interoperability between a variety of instruments manufactured by various vendors, VISA was developed by the VXIplug&play Systems Alliance [15]. VISA is a communication standard for configuring and programming instrumentation over GPIB, VXI/PXI [16], serial or USB interfaces. VISA supports multiple platforms and multiple types of control. One of its greatest benefits is that it provides simple and easy-to-use Application Programming Interfaces (APIs) for establishing robust communication with different peripheral devices regardless of their I/O interface type. The VISA hierarchy is shown in figure 2.6.

Basically, sending data to and receiving data from instruments can be achieved with two APIs: VISA Read and VISA Write. The VISA Read API reads the specified number of bytes from the instrument identified by the VISA resource name and returns the data in a read buffer. The VISA Write API writes data from the buffer to the device specified by the VISA resource name. Furthermore, only serial communication requires configuring the port settings, which can be done with the VISA Configure Serial Port API. Therefore, it is very easy to configure, program and troubleshoot the whole system by using VISA.
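As an illustration of the same open/configure/write/read pattern outside LabVIEW, the following minimal sketch uses the PyVISA library. The resource name and the *IDN? query are placeholders, not the actual instruments of this work.

```python
# Minimal PyVISA sketch of the VISA open / configure / write / read pattern
# described above. The resource name and the "*IDN?" query are placeholders;
# the instruments in this thesis are driven from LabVIEW, not from Python.
import pyvisa

rm = pyvisa.ResourceManager()
inst = rm.open_resource("ASRL3::INSTR")      # hypothetical serial resource

# Serial links are the only case that needs explicit port configuration.
inst.baud_rate = 115200
inst.read_termination = "\n"
inst.write_termination = "\n"

inst.write("*IDN?")          # VISA Write: send a command string
reply = inst.read()          # VISA Read: fetch the response from the buffer
print(reply)

inst.close()
rm.close()
```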


2.2.3 Interface Bus

Figure 2.6 illustrates the most common I/O interfaces of a virtual instrument. Several popular interconnect buses dominate the industry:

• Serial Port Bus

• GPIB Bus

• VXI Bus

• PC Bus

This section details the background and properties of each I/O bus:

The serial port communication bus, based on the RS-232 protocol, uses a transmitter for data transfer between the instrument and a Personal Computer (PC). It is designed to send data one bit at a time, consecutively, over a single communication channel or computer bus. However, it has limits in terms of data transmission speed and distance (up to 19.2 Kbytes/second, more recently 115 Kbytes/second, and 15 metres) [17], and it does not allow more than one device to be connected. Generally, serial communication is popular because most PCs nowadays have at least one serial port, which means that no extra converter is needed to run the system other than a cable.

The GPIB bus was the first industry-standard bus for parallel communication and is also known as IEEE-488 [18]. A typical GPIB system can communicate with a maximum of 15 instruments simultaneously on one GPIB bus. The GPIB interface is a 24-pin connector [19] which has 8 data lines used to transmit data and messages between devices on the same bus, one byte (8 bits) at a time. It also includes three handshake lines for message transfer. The data rate can be up to 1 Mbyte per second [17].

Generally, a VXI bus is composed of a mainframe with a backplane connection for plug-in modules. The VXI bus provides more features for the modules, such as isolation and shielding. The communication speed between units based on the VXI standard can exceed 20 Mbytes/second [20]. However, the main disadvantage of this system is that the equipment is very expensive, so it is suitable mainly for well-financed teams.

In addition, with the proliferation of DAQ devices, the PC bus has been widely adopted. PC buses, which include USB and Ethernet connections, are commonly used for simple plug-in instrumentation. The main properties of the PC bus are simplicity of connection and low cost, which make it well suited to building a small and inexpensive data acquisition system.


2.2.4 Advantage and Disadvantage of Virtual Instruments

The advantages of a virtual instrument are obvious. For a working virtual instrument, the main costs come from two sides: the hardware instrument itself and the PC. The cost of the instrument varies from device to device, and a modern PC or workstation usually satisfies most of the requirements for running a virtual instrument, so the cost of a virtual instrument setup is predictable. In terms of performance, a virtual instrument has good data processing and display capacity based on the computing power of the computer. Users can also customize the software design to obtain the desired results, which further improves performance.

However, a virtual instrument also has some disadvantages. First of all, it consumes a large amount of power: in many cases the computer and external devices run simultaneously for months and constantly need a power supply. In addition, if data is transmitted wirelessly, its security cannot be guaranteed and it might be intercepted during transmission.

2.3 Software Design

Software design is the process of envisioning and defining software solutions for one or more sets of problems. It involves many activities, such as abstraction, refinement, software architecture, modularity and front-end UI design. This section, however, focuses on three main points: software architecture, modularization and user interface.

2.3.1 Software Architecture

Software architecture is the highest-level abstraction of a system and defines its blueprint. It identifies the software system as a collection of key components interacting with each other through connectors that describe the relations among them. The software architecture lays the foundation for the software and is costly to modify once implemented. The aim of designing a software architecture is to bring characteristics such as flexibility, modularity and scalability into the structural solution in order to fulfil the technical requirements of the software. A good software architecture is crucial when developing software: it keeps the software easily maintainable, reduces costs and avoids duplicate code. It also increases the performance of the software and makes it fault-tolerant.

Architectural design patterns are common, reusable solutions for designing software architecture. The purpose of an architectural pattern is to fit the collection of components together and to establish the message and data flow between them. Nowadays, when designing a data acquisition software architecture [21], the producer-consumer and event-driven patterns are the most common solutions.

The producer-consumer pattern, which originates from the master-slave pattern [22], is a classic example of parallelism in which multiple tasks are performed concurrently. Figure 2.7 illustrates the principle of a multi-producer, multi-consumer design proposed by Ben Stopford. The pattern consists of two main parts: producers and consumers that share a common buffer, such as a queue [23]. The producers generate data and messages at their own rate and send them through shared memory. At the other end, the consumers process the data at a different rate, and several consumers can drain data from the same buffer. To connect producers and consumers, a queue is usually applied for data communication. A queue is an excellent mechanism for piping data from producer to consumer: it avoids data loss during transmission and keeps a large amount of data in the memory pool at run time. There are usually two different ways of inserting data. The first is to append the data element to the end of the queue, following the First-In-First-Out (FIFO) [24] principle; FIFO guarantees that the data generated by the producer will be processed in order. The other method is to insert high-priority data at the front of the queue to ensure immediate execution.

Figure 2.7. Multiple producers generate data at their own speed and multiple consumers utilize it at a different rate. The data and messages are transmitted via queues [25].
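The following minimal Python sketch mirrors the pattern with a FIFO queue shared by a producer thread and a consumer thread. It is illustrative only, since the implementation described in chapter 3 uses LabVIEW queues and loops rather than this code.

```python
# Minimal producer-consumer sketch using a FIFO queue, mirroring the pattern
# described above. Illustrative Python, not the LabVIEW implementation.
import queue
import threading

buffer = queue.Queue()            # shared FIFO buffer

def producer(n_items: int) -> None:
    # Generate (message, data) pairs at the producer's own rate.
    for i in range(n_items):
        buffer.put(("new_frame", i))
    buffer.put(("stop", None))    # sentinel telling the consumer to exit

def consumer() -> None:
    # Consume items in FIFO order, possibly at a different rate.
    while True:
        message, data = buffer.get()
        if message == "stop":
            break
        print(f"processing {message} #{data}")

t_prod = threading.Thread(target=producer, args=(5,))
t_cons = threading.Thread(target=consumer)
t_prod.start(); t_cons.start()
t_prod.join(); t_cons.join()
```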


Figure 2.8. The publisher sends a message through an event channel to the subscribers of an event when it occurs. This achieves communication between decoupled software components [26].

The event-driven pattern is another classical design paradigm, based on the publish-subscribe model, in which the flow of the software is determined by events. An event is any important change in the state of the system hardware or software. Events have two sources, internal and external. External events mainly stem from user actions such as a keystroke or a mouse click, whereas the start or end of the software execution can be considered an internal event. Figure 2.8 explains the principle of the publish-subscribe mechanism [27] in the event-driven pattern. When an event happens, the event publisher categorizes the event messages and notifies the event subscribers without knowing the number of specific subscribers or the outcome of the event. A subscriber triggers its actions only when it receives the message. The event channel is used for transmitting the events between the decoupled software components. The primary advantage of the event-driven mechanism is that the triggering and execution of events are asynchronous and non-blocking. It also avoids unnecessary polling in the operating system, which saves CPU resources for other tasks [28].
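A minimal sketch of the publish-subscribe mechanism is given below. It is illustrative Python with hypothetical event names, whereas the thesis implementation relies on LabVIEW's event-driven mechanisms.

```python
# Minimal publish-subscribe sketch mirroring the event channel in figure 2.8.
# Illustrative only; event and handler names are hypothetical.
from collections import defaultdict
from typing import Callable, DefaultDict, List

class EventChannel:
    """Routes published events to every subscriber of that event name."""

    def __init__(self) -> None:
        self._subscribers: DefaultDict[str, List[Callable[[object], None]]] = defaultdict(list)

    def subscribe(self, event: str, handler: Callable[[object], None]) -> None:
        self._subscribers[event].append(handler)

    def publish(self, event: str, payload: object = None) -> None:
        # The publisher does not know how many subscribers exist.
        for handler in self._subscribers[event]:
            handler(payload)

channel = EventChannel()
channel.subscribe("start_button", lambda _: print("motor handler: begin rotation"))
channel.subscribe("start_button", lambda _: print("camera handler: arm acquisition"))
channel.publish("start_button")
```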

The producer-consumer pattern provides an excellent solution for the data flow inside the software system, with the possibility of acquiring and consuming data simultaneously, while the event-driven pattern triggers the communication between the decoupled software components. As a result, it is possible to make the software scalable, loosely coupled and easy to maintain.

2.3.2 Modularization

Modularization is a technique for breaking a large software system down into multiple independent, discrete modules, each of which performs its own functionality independently. Effective modular design can be accomplished if the separate modules are independently solvable, adjustable and compatible.


Figure 2.9. The main program consists of two modules named M1 and M2, each with its own functions. Each module can either interact with the other or use its own functions to complete certain tasks.

Figure 2.9 shows an example of modularization in software design: the main program consists of two discrete modules named M1 and M2. Each module has little or no reliance on the other, and each has its own functions to fulfil certain tasks. In order to accomplish a system task, the modules can interact and cooperate, or each can use its own methods.

There are two criteria for assessing the extent of modularity of software: cohesion and coupling. Cohesion describes the strength of the relationship between the data and functions within a single module. High cohesion within a module is usually recommended because it groups related characteristics of the software together and reduces duplicate code. Coupling measures the strength of the relationship between different modules inside the software. Low coupling tends to be preferable because modules should be as independent as possible from each other; modifying one module then does not heavily affect the others. All in all, good modular software should have high cohesion and low coupling.

OOP is one of the typical ways of modularizing software, based on the concepts of class and object. A class corresponds either to a real-world thing or to an abstract entity and contains a collection of data and methods; it acts like a blueprint that defines the behaviour of the data and methods. An object is an instance of a class. OOP provides encapsulation for objects, which means that the internal representation of the data can be hidden from outside the scope of the object to prevent misuse. Only the object's own methods have access to the internal data.
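The following small sketch illustrates encapsulation in OOP, loosely analogous to the OOP-based device handlers implemented in chapter 3. The class and method names are hypothetical and not taken from the LabVIEW code.

```python
# Small sketch of encapsulation: internal state is hidden behind methods.
# Class, method names and the steps-per-degree value are illustrative only.
class RotationalMotor:
    """Wraps a rotational stage; internal state is hidden behind methods."""

    def __init__(self, steps_per_degree: int) -> None:
        self._steps_per_degree = steps_per_degree   # leading underscore: internal data
        self._position_deg = 0.0

    def rotate(self, angle_deg: float) -> int:
        """Rotate by angle_deg and return the number of motor steps issued."""
        steps = round(angle_deg * self._steps_per_degree)
        self._position_deg = (self._position_deg + angle_deg) % 360.0
        return steps

    @property
    def position_deg(self) -> float:
        # Read-only access to the internal position.
        return self._position_deg

motor = RotationalMotor(steps_per_degree=100)
motor.rotate(30.0)
print(motor.position_deg)   # 30.0
```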

The advantage of software modularization is obvious. The software can be divided into several units according to functionality, and the individual modules are easier to maintain and modify than the whole software system. Highly cohesive components inside a module can also be reused.

2.3.3 User Interface

In contrast to the software architecture, the user interface is the front-end window through which users interact with and operate the software. Users can control the hardware and software through the user interface. It is an important part of the software because it interacts directly with users and gives them insight into the software. The graphical user interface is one of the most common UI types and provides graphical ways to interact with the software. A UI should be simple to use, clear to understand and responsive to user actions.

A basic graphical UI usually contains a window, tabs, menus, icons and a cursor. A window is the place that contains all the content of the software: contents such as tabs, file path inputs or icons are placed and displayed in a window. The window can be resized, minimized or maximized to flexibly meet the user's requirements. A main window can also contain a few child windows, such as tabs. A Tabbed Document Interface (TAB) can hold multiple independent child windows in the same main window, with the selected tab considered the preferred panel to view. This design highlights the preferred window and hides the unused ones, which saves space in the main window.

For instance, most web browsers have adopted this design. A menu is a collection of items of a similar type that are grouped and placed together in the window. A menu can be either visible or invisible depending on the workflow of the software. Icons are the smallest elements of the UI, such as a button or a file path input; they can be clicked or double-clicked to trigger certain functionality. Lastly, the cursor represents user input and interacts with external devices such as a mouse or touchpad; it is used to manipulate the icons.

The user interface is a critical part of the software and directly determines the user experience. Therefore, a well-designed user interface makes the software attractive and simple to use.


3 RESEARCH METHODOLOGY AND MATERIALS

3.1 The Integration of Imaging Methods

As subsection 2.1.2 mentions, rotational EIT makes it possible to combine other imaging methods with EIT, because the number of measurements from rotational EIT does not rely only on the number of attached electrodes: by rotating the sample, more sets of measurements can be obtained. Therefore, it is feasible to integrate the OPT imaging method with rotational EIT [3].

By combining the OPT system displayed in figure 2.1 with the rotational EIT proposal, the schematic layout of the hybrid imaging setup illustrated in figure 3.1 is obtained.

Figure 3.1. Schematic setup of the hybrid imaging methods: the novel hybrid imaging setup combines EIT and OPT. The setup supports bright-field OPT with LED (a) and fluorescent OPT with LED (d). The EIT mode is achieved with the spectro-EIT device (c). The camera (b) takes frames for OPT mode. The sample (S) is rotated during the data acquisition process.

In this optical hardware setup, the eight electrodes for EIT imaging are attached to the opposite walls of the sample chamber, and the uncovered surface creates free space for OPT imaging, enabling simultaneous OPT and EIT imaging. In figure 3.1, OPT consists of an LED light source, a camera for capturing the data and, in between, a sample manipulation platform which can hold and rotate the sample (S). By using different light sources, OPT provides two different imaging modes: bright-field and fluorescent OPT. In this setup, bright-field OPT uses LED (a) and fluorescent OPT uses LED (d). If EIT is used, the EIT device is connected to the sample by eight electrodes, which are distributed evenly on opposite sides of the sample container. When both EIT and OPT methods are used, the corresponding LED light can be configured manually depending on which OPT mode is selected. This imaging hardware setup provides options and flexibility to researchers when they need data, and it also enables multimodal imaging.

Figure 3.2. There are three main components that need to be well controlled by the integrated software for the in-house built setup: the EIT device (1), the camera (3) and the sample manipulation platform (2), which has four motorized step motors. The EIT device and step motors are linked via USB buses; the camera is connected by a camera link.

Therefore, to fulfil the functionality of the integrated imaging setup, the integrated software should control the EIT device, the camera and the sample manipulation platform flexibly, with a good software architecture design. Figure 3.2 shows the in-house built setup for the multimodal imaging methods. In this figure, the three main components that need to be well controlled by the integrated software are marked from left to right. Device number 1 is the EIT device, connected to the PC by a USB cable. The sample manipulation platform, marked with number 2, consists of four motorized step motors and is controlled via USB cables. Device number 3 is the camera with a camera link connection.


3.2 Software Workflow and Algorithm Design

The workflow and algorithm structure of the integrated software are shown in figure 3.3.

Figure 3.3. The algorithm design of the integrated software

First of all, the camera and the motors on the sample manipulation platform should be connected and powered on before the measurements. If the EIT device is needed, it must also be powered on and connected to the PC. After each device is physically linked to the PC, its communication address is registered on the PC. When the integrated software runs, it establishes the communication channels and controls these devices by calling the corresponding LabVIEW drivers via the registered addresses. Because sample alignment is necessary before any of the imaging methods starts and it always requires the camera and the sample manipulation platform, the integrated software always connects to the camera and the step motors. Once the connection is established, the camera preview constantly shows a real-time view from the camera, and the integrated software UI guides the user through the operations for any data collection. The integrated software changes the visibility of UI items depending on the user's actions to control the workflow, which also keeps the UI simple.

Meanwhile, the integrated software deliberately displays the main page first when it starts. The data saving path must be given before the integrated software displays the options for choosing the imaging mode; otherwise, the user is not allowed to proceed to the next step. Then comes the sample alignment, where the sample should be centred and placed around the focal plane. The sample on the sample manipulation platform has to be aligned properly with the help of the camera preview. The integrated software enables users to control the motors freely, adjusting either the movement distance of the linear motors or the rotational angle of the rotational motor. Users can move the sample in 3D space along the XYZ axes and rotate it clockwise. After the folder path is chosen and the sample is aligned, the user has three options to choose from: OPT, EIT and OPT/EIT. When either EIT or OPT/EIT is selected, a control for connecting the EIT device is shown, and the device is connected by clicking it.

Then the integrated software guides the user to configure the settings of each device according to the chosen mode. It is important to configure the devices carefully with appropriate values to obtain a more precise measurement. In OPT mode, the user has to configure the binning value and exposure time of the camera, since these values determine the quality of the images. In EIT mode, the signal voltage, frequencies, gains and step period are important for the EIT device to collect the desired data with less noise. If OPT/EIT mode is chosen, the user has to configure both the camera and the EIT device. The user must also give the rotational angles for EIT, OPT or OPT/EIT, depending on the mode selected. The rotational angle determines the number of data sets to obtain, so the angle must be entered precisely. In OPT or EIT mode, the value 360 must be divisible by the rotational angle. In OPT/EIT mode, 360 must be divisible by both the OPT and EIT angles, and additionally the EIT angle must be a multiple of the OPT angle. The integrated software also allows the user to enter other useful information, such as the user name, sample information and extra notes, for future reference. When everything is set, the user clicks the start button to trigger the measurement process. At the same time, a report in the form of an Excel file is created, recording the sample information and other helpful parameters such as the rotational angle and data acquisition timestamp.
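The angle rules above can be summarised in a short validation sketch. This is a hypothetical helper written in Python for illustration, not the check implemented in the LabVIEW code.

```python
# Sketch of the rotational-angle rules described above; hypothetical helper.
def _is_multiple(a: float, b: float, tol: float = 1e-9) -> bool:
    """True if a is (numerically) an integer multiple of b."""
    if b <= 0:
        return False
    ratio = a / b
    return abs(ratio - round(ratio)) < tol

def validate_angles(mode: str, opt_angle: float = 0.0, eit_angle: float = 0.0) -> bool:
    if mode == "OPT":
        return _is_multiple(360, opt_angle)
    if mode == "EIT":
        return _is_multiple(360, eit_angle)
    if mode == "OPT/EIT":
        # 360 must be divisible by both angles, and the EIT angle
        # must itself be a multiple of the OPT angle.
        return (_is_multiple(360, opt_angle)
                and _is_multiple(360, eit_angle)
                and _is_multiple(eit_angle, opt_angle))
    return False

print(validate_angles("OPT/EIT", opt_angle=0.9, eit_angle=9.0))   # True
```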

In both EIT and OPT modes, the imaging procedure during the rotational measurement phase can be divided into three steps: data collection, data display and data saving. Data collection is followed by data display and saving. Since each OPT image is around 8 MB and merely displaying and saving the images takes a few seconds, image display and saving must be handled asynchronously to reduce the time cost of OPT mode; the next image collection operation does not need to wait until the display and saving have finished. However, EIT data collection usually takes longer, because the EIT imaging data includes several tetrapolar measurements and their time cost depends on the number of measurements and the measurement time. Hence, to guarantee the quality of the EIT imaging data, the rotational motor has to wait until the data is collected and saved.

In OPT mode, the camera takes one image each time the rotation stops. In EIT mode, the integrated software collects and saves the data at each data collection point. In OPT/EIT mode, the data collection positions are divided into two types: OPT data collection positions and OPT/EIT data collection positions. At an OPT data collection position, the integrated software acquires only OPT imaging data. At an OPT/EIT data collection position, it gathers both OPT and EIT data concurrently.
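A small sketch of how such a position schedule can be derived from the two angles is shown below. The helper is hypothetical and only illustrates the rule that every EIT position coincides with an OPT position.

```python
# Illustrative classification of OPT/EIT data-collection positions
# (hypothetical helper, not the LabVIEW implementation): every position is an
# OPT position, and every EIT-angle multiple is an OPT/EIT position.
def acquisition_schedule(opt_angle: float, eit_angle: float):
    """Yield (angle, kind) pairs for one full 360 degree rotation."""
    n_positions = round(360 / opt_angle)
    ratio = round(eit_angle / opt_angle)      # EIT angle is a multiple of OPT angle
    for i in range(n_positions):
        angle = i * opt_angle
        kind = "OPT/EIT" if i % ratio == 0 else "OPT"
        yield angle, kind

for angle, kind in acquisition_schedule(opt_angle=45.0, eit_angle=90.0):
    print(f"{angle:6.1f} deg : {kind}")
# 0, 90, 180 and 270 deg are OPT/EIT positions; 45, 135, ... are OPT-only.
```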

In OPT mode, the integrated software can check whether a full 360° rotation has been completed immediately after the data is captured. In EIT or OPT/EIT mode, however, it first has to ensure that the EIT data has been acquired and saved properly. If a full 360° rotation has not been completed, the integrated software triggers the motor to rotate and performs data acquisition again; otherwise, the rotational motor stops and the program waits for the next data acquisition process. In parallel, the OPT image is updated on the camera preview and saved into a TIFF file, and the EIT data is displayed on the graphs of the EIT UI page and saved into a text file. Hence, the EIT and OPT data are processed, displayed and saved asynchronously without blocking the movement of the rotational motor.

3.3 The Key Components and Techniques For The System

This section describes the key hardware components used in the project and explains why the G programming language stands out among many other graphical programming languages.

3.3.1 Programming Language and Development Software

The G language, first released in 1986 by National Instruments [29], is a graphical programming language based on the data-flow principle; it uses icons instead of lines of text to build application software. In G, every routine or function is stored as a Virtual Instrument (VI), which has three main components: a block diagram, a front panel and a connector panel.

The front panel is a container that holds the graphical representations of the controls and indicators of a VI and displays them at run time. The block diagram is where the code is designed and shown graphically. The connector panel is the collection of terminals linked to the controls and indicators of the VI; it serves as the interface when the VI is embedded as a sub-VI.

Due to its distinctive graphical representation and syntax, the G language has been widely adopted in fields such as automation engineering and biomedical engineering, which typically require data acquisition applications. In addition, Laboratory Virtual Instrument Engineering Workbench (LabVIEW) supports parallel programming, which means that multiple while loops can execute in parallel. With all the advantages it provides, it is rather simple to build a modular, integrated data acquisition system.

During the project, the main development software used was LabVIEW 2014 (National Instruments Corporation, Austin, Texas, United States). The LabVIEW 2014 version was chosen because it supports the third-party libraries provided by the device manufacturers. For instance, LabVIEW 2014 could easily integrate the camera and motor drivers into the development environment.

3.3.2 Computer

The PC used during the making of the thesis was a Dell desktop equipped with the Windows 7 Enterprise 64-bit operating system. The computer has an Intel Core i7-3770 Central Processing Unit (CPU) running at 3.40 GHz, and the installed memory (Random-Access Memory (RAM)) is 32.0 GB. The configuration of the computer must be high-end because, during the data acquisition process, it has to perform sophisticated computation and image processing. The image data gathered from OPT is quite large, usually occupying 3.125 GB of space, and the EIT data acquisition process requires complex computation.

Therefore, the whole system needs a powerful computer to handle all the computation and data representation.

3.3.3 Camera

The camera selected was a Hamamatsu Orca-Flash4.0 V2 digital Complementary Metal Oxide Semiconductor (CMOS) camera from Hamamatsu Photonics, Japan. The camera has a very good image resolution (2048 x 2048 pixels) with a pixel size of 6.5 µm, and its readout speed is fast, 30 frames per second. The camera also has a modular video-capture driver library that is supported by LabVIEW 2014. The camera driver provides many functions for configuring the camera and capturing images, which saves software development time.
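As a consistency check, and assuming the camera's 16-bit pixel depth, one full-resolution frame occupies 2048 x 2048 x 2 bytes = 8 MiB, which matches the per-image size of about 8 MB mentioned in section 3.2; the 3.125 GB OPT data set mentioned in subsection 3.3.2 would then correspond to roughly 400 projections, i.e. a 0.9° rotational step over a full revolution.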

The original OPT software developed by M.Sc Ana Soto already supported many helpful functions for this specific camera. With it, the real-time camera preview could be updated smoothly and pixel analysis performed on the image preview. Camera settings such as the binning value and exposure time are also handled by the original OPT software. Most importantly, it included well-implemented OPT imaging and data saving functions.


3.3.4 EIT Measurement Device

The EIT instrument is a novel spectro-tomography [5] device that is composed of two essential units as figure 3.4 shows.

Figure 3.4. The EIT device is composed of two units: a functional multiplexer, which receives and sends data bidirectionally, and a spectroscopy unit, which processes the signal and performs the discrete Fourier transform.

From figure 3.4 we can see that the right-hand part is the main part: the QUADRA spectroscopy unit, which is built around a Digital Signal Processor (DSP). Inside the device there is a Multiply-and-Accumulate (MAC) processor for realizing the discrete Fourier transform. Most critically, the unit supports real-time spectral analysis every millisecond for 15 different frequencies [5].

The left-hand part is a functional multiplexer; on its front, 16 pins [5] are provided for transferring both the excitation current and the response voltage bidirectionally.

The novel EIT device is straightforward to operate: it is connected to the PC via a USB cable, and once plugged into the PC's USB port the device is ready to use. The device comes with the original EIT software, developed by R. Land (Tallinn University of Technology, Estonia), which performs well. Before data collection, the signal voltages, gains, measurement table and step period need to be configured. The original EIT software provides options to acquire different data sizes for one measurement, depending on the selected data length. It also supports two modes of data acquisition: single mode and continuous mode. In single mode, the imaging data acquisition stops after a number of tetrapolar measurements, chosen by the user, have been completed. Continuous mode enables the device to collect data repeatedly and display it on the UI at the same time.

During the measurement, users can modify which frequencies are displayed by selecting and deselecting the frequency options in the original EIT software. The data display is clear and informative: it shows the real and imaginary parts of the measured EIT data on graphs for the different frequency signals. The data update is responsive, and each frequency is drawn in a different colour. The data saving function of the original EIT software is also handy and saves all the raw data into a text file. The original EIT software additionally supports a repeated saving function, meaning that it records EIT data multiple times when data acquisition is in continuous mode. It also allows the number of tetrapolar measurements to be modified by loading predefined Excel files that specify the measurement details, which makes the measurements flexible.

3.3.5 Interface Buses

The Hamamatsu CMOS camera supports the Camera Link [30] interface, which is also a serial communication standard but is mainly designed for industrial video products such as cameras. With the Camera Link interface, the camera readout performance is excellent, transmitting up to 1000 high-resolution frames per second. The EIT device has a USB 3.0 port for connecting to a PC, and all the motorized step motors are likewise linked to the PC via USB cables. The Dell desktop supports these two I/O bus connections, and no extra converters are needed to establish the connections.

3.3.6 Sample Manipulation Platform

The main support platform has been made for holding and moving the sample. The platform includes four step motors (Standa Ltd., Vilnius, Lithuania): three motors for linear movement and one motor for rotational movement. All four step motors can also be adjusted manually.

In figure 3.5, all four motors are marked with red integers from 1 to 4. Motor No. 3 is the rotational motor and it is installed on a rotational stage. Motors No. 1, 2 and 4 are linear motors which move the sample along the X, Y and Z axes in 3D space.

More specifically, the design of the rotational motor is displayed in figure 3.6, where we can see that the rotational motor drives a cylindrical cap. In the centre of the cap there is a hole that holds a steel bar, and the end of the steel bar is used for sample attachment.

The original OPT software also includes the movement functionality of the motorized step motors: the linear motors move the sample along the XYZ axes in 3D space and the rotational motor rotates the sample clockwise. The movement distance of the linear motors is adjustable, and the speed of all motors is a constant 1500 steps per second.


Figure 3.5. The main support platform, where the sample is fixed and rotated, contains four motorized step motors. Motors No. 1, 2 and 4 perform the linear movement of the sample along the X, Y and Z axes, respectively, during sample alignment. On top of the platform, a rotational motor (No. 3) spins the sample through a full 360° rotation; the sample is attached to the steel tube which is tightened in the cylindrical cap.

Figure 3.6. Rotational Motor Design

The rotational motor has high accuracy and can achieve a 0.01° movement in a single step. During the data acquisition process, the sample is fixed via the metal bar going through the central hole of the brass cap.
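Assuming the stated resolution of 0.01° refers to degrees per motor step, a full 360° revolution corresponds to 360° / 0.01° = 36,000 motor steps; for example, a hypothetical 0.9° projection increment would then map to 90 motor steps.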


3.4 Software Implementation

The integrated software is required to be modular, loosely coupled and extensible for future development; therefore, suitable software architecture design patterns must be exploited to the fullest and applied throughout the integrated software.

3.4.1 The Architecture of Framework

A fundamental, properly designed structure forms the blueprint of the integrated software. Because G programming is data-flow programming, the main architecture can be divided into three parts from left to right: initialization, run and close. The first part, initialization, creates the temporary memory resources of the software, such as queues and notifiers. The main body of the software, run, contains multiple producer-consumer loops and manages all the tasks and data processing. The rightmost part, close, cleans up the software on exit, for example by freeing the memory held by the queues and notifiers and shutting the software down.
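The same initialize-run-close lifecycle can be outlined, purely as an illustration, with the following Python sketch; the handler function stands in for the LabVIEW while loops and is not part of the actual implementation.

# Conceptual outline of the initialize -> run -> close lifecycle.
# run_handler() stands in for one LabVIEW handler loop; it is a placeholder.
import queue
import threading
import time

def run_handler(name, q):
    """Placeholder handler loop: consume messages until a shutdown request arrives."""
    while True:
        message, data = q.get()
        if message == "shutdown":
            break
        # ... perform the task described by (message, data) ...

def main():
    # initialization: create the communication resources (queues) up front
    names = ("ui", "motor", "camera", "image_data", "eit_device", "eit_data")
    queues = {name: queue.Queue() for name in names}

    # run: each handler is an independent loop reading from its own queue
    threads = [threading.Thread(target=run_handler, args=(n, q))
               for n, q in queues.items()]
    for t in threads:
        t.start()
    time.sleep(0.1)   # stands in for the UI event loop running until the user exits

    # close: ask every loop to stop, then release the resources
    for q in queues.values():
        q.put(("shutdown", None))
    for t in threads:
        t.join()

if __name__ == "__main__":
    main()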

Figure 3.7 and figure 3.8 illustrate that the run part follows a top-down pattern and contains seven important while loops. Figure 3.7 shows that the upper body of the run section has an event listener loop, a message queue handler, a motor handler, a camera handler and an image data handler. The event listener loop and the message queue handler handle user events. The motor handler fulfils the motor functionalities, the camera handler handles all the camera functionalities, and the image data handler displays the image preview and saves the images in the correct format for OPT mode.


Figure 3.7. The upper body of the run section has an event listener loop, a message queue handler, a motor handler, a camera handler and an image data handler.

Figure 3.8 shows that the bottom part of the run section contains an EIT device handler controlling the EIT device and an EIT data handler. The EIT data handler processes and presents the EIT data and saves it in the correct format for EIT mode.


Figure 3.8. The bottom body of the run section has an EIT device handler and an EIT data handler.

3.4.2 The Implementation of Queue

In the project, the types of data acquired from OPT and EIT were different. However, both were transmitted through queues using the same enqueue and dequeue functions. Therefore, to make the data transmission flexible, a queue element data type suitable for all the different data had to be chosen (figure 3.9). The queue data type is a cluster containing a string named message and a variant called message data, which is a generic container for all other data types. This is a well-known "one solution for all situations" approach in LabVIEW.


Figure 3.9. The queue data type is a cluster which contains a string-type message and a generic variant for the message data that can be converted into any data type later.
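A text-based analogue of this cluster, given here only to illustrate the idea, could look as follows in Python; the field names simply mirror the LabVIEW cluster.

# Conceptual analogue of the LabVIEW queue element: a string message plus a
# generic payload that can hold any data type (the LabVIEW variant).
from dataclasses import dataclass
from typing import Any

@dataclass
class QueueMessage:
    message: str               # selects the state to execute in the consumer
    message_data: Any = None   # generic container, converted to a concrete type later

# Example: the same element type carries very different payloads.
rotate = QueueMessage("rotate", 90.0)                        # rotation angle in degrees
save   = QueueMessage("save image", {"path": "img_0001.tif"})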

Inside the producer-consumer loops, the enqueue and dequeue VIs must be wrapped properly using the self-defined queue data type. Figure 3.10 and figure 3.11 show the customized enqueue functions, which share the same data type but differ in enqueue priority. The difference comes from the boolean input Priority Message? (F). If it is true, a special enqueue function is called which inserts the data at the front of the queue. When it is false, which is the default, the VI acts as a normal enqueue function following the FIFO principle. The message queue reference defines the destination of the data.

Figure 3.10. The normal enqueue function executes when the Priority Message value is false. It needs a queue reference and a message as inputs and puts the data into the queue.


Figure 3.11. The priority enqueue function executes when the Priority Message value is true. It needs a queue reference and a message as inputs and inserts the data at the front of the queue.

The dequeue VI is shown in figure 3.12. It receives the data from the specific queue appointed by the message queue reference. The dequeue function then parses and unbundles the incoming data into a message and message data. The string message determines which state of the state machine is executed, and the message data is consumed in that state.

Figure 3.12. The dequeue function executes when the queue is in a normal state and is not empty. It receives the data cluster and parses it into a message and message data to be consumed in the state machine.
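The behaviour of the wrapped enqueue and dequeue VIs can be sketched, for illustration only, with a small Python class; appendleft models LabVIEW's "enqueue at opposite end" used for priority messages, and the class is not part of the actual software.

# Illustrative sketch of the wrapped enqueue/dequeue behaviour. A priority
# message is inserted at the front of the queue; normal messages follow FIFO.
import threading
from collections import deque

class MessageQueue:
    def __init__(self):
        self._items = deque()
        self._not_empty = threading.Condition()

    def enqueue(self, message, message_data=None, priority=False):
        with self._not_empty:
            if priority:
                self._items.appendleft((message, message_data))  # jump the queue
            else:
                self._items.append((message, message_data))      # normal FIFO order
            self._not_empty.notify()

    def dequeue(self):
        with self._not_empty:
            while not self._items:
                self._not_empty.wait()
            # parse/unbundle into message and message data for the state machine
            message, message_data = self._items.popleft()
            return message, message_data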

Based on the well-formed architecture shown in figure 3.9, the producer can send a command and data simultaneously to the consumer to perform certain tasks. The disadvantage is that the communication is unidirectional, only from a producer to a consumer. This shortcoming was overcome by adding an extra state machine and reserving a queue memory buffer for the producer, which then acts as a consumer loop as well. This mechanism ensures that the producer and consumer can communicate with each other. Importantly, only one enqueue and one dequeue function were used in the whole project. In order to avoid race conditions [31] during execution, both VIs must be set to non-reentrant execution.


3.4.3 UI Structure Design

Responding to user interface events quickly is essential, and the most convenient way to handle user interface events is to use the producer-consumer model. In order to keep track of all the events without exhausting CPU resources, an event structure was implemented inside the producer loop. The event structure monitors an event source for a significant state change and, once it happens, catches the event and forwards it to the consumer loop to complete the corresponding task. The UI structure is placed at the top of the top-down main body design, as displayed in figure 3.13.

Once a user performs an action on the graphical user interface, a specific event is triggered and the user command is executed in the consumer loop, which is called the Queued Message Handler (QMH) [32]. The QMH is the message hub of the whole system: it receives all the user instructions and external inputs. Then, according to the nature of each task, the QMH distributes the instructions to the different executors.

Figure 3.13. The queued message handler is responsible for handling all the UI events. The event listener on the top subscribes to all the user actions, such as a click of the 90° rotation button. When an event is fired, it wraps the message and message data and sends them to the QMH loop to handle. The QMH then sends the task to the appropriate task handler for processing.
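As a rough, non-LabVIEW illustration of this routing, the QMH can be pictured as a loop that forwards each incoming UI message to the queue of the responsible task handler; the message names and the routing table below are invented for the example, and the queues reuse the MessageQueue sketch shown earlier.

# Rough illustration of the QMH routing role. The message names and the
# routing table are invented; queues follow the MessageQueue sketch above.
def qmh_loop(ui_queue, handler_queues):
    """Receive UI messages and forward each one to the responsible handler."""
    routing = {
        "rotate 90": "motor",
        "start opt": "camera",
        "start eit": "eit_device",
        "save image": "image_data",
    }
    while True:
        message, message_data = ui_queue.dequeue()
        if message == "exit":
            for q in handler_queues.values():
                q.enqueue("shutdown", priority=True)  # stop all handler loops first
            break
        target = routing.get(message)
        if target is not None:
            handler_queues[target].enqueue(message, message_data)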

Dynamic event registration is adopted to shut down the event producer loop. The dynamic event subscriber is set in the event structure and the event generator is inside the consumer, which can turn the event on or off dynamically. With this design pattern, users can define their own events without external user actions and the events can be fired much more flexibly. Additionally, a Functional Global Variable (FGV) and a global variable are used to keep the value of an event source, which allows accessing its value anywhere in the application.
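The role of the FGV, keeping the latest value of an event source accessible from anywhere, can be mimicked in text form roughly as follows; this is a minimal sketch of the idea, not LabVIEW code.

# Minimal text-form analogue of a Functional Global Variable: one access point
# that either stores a new value or returns the last stored one.
import threading

class FunctionalGlobal:
    def __init__(self, initial=None):
        self._value = initial
        self._lock = threading.Lock()  # the FGV's non-reentrancy plays this role in LabVIEW

    def set(self, value):
        with self._lock:
            self._value = value

    def get(self):
        with self._lock:
            return self._value

# Any part of the application can read the same event-source value.
stop_requested = FunctionalGlobal(False)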

3.4.4 The Implementation for Motors and Camera

The motors and the camera are standalone devices: they have their own properties and fulfil their own functions independently, without relying on other hardware components. Therefore, the motors and the camera were implemented with the concept of OOP. Both devices have standard, working LabVIEW drivers provided by the manufacturers. The drivers are modular and compatible with LabVIEW 2014. Using VISA, the connection between the devices and the PC can be established easily.

In the project, there are three linear motors and one rotational motor controlled by the computer via VISA. By iterating over the VISA address array, each motor can easily be addressed. Due to this property, all four motors are abstracted into a common, generic class named Motor, which provides the movements along the different directions.

Motor functions are used in the alignment and imaging phases. For sample alignment, all the motors need to be used appropriately to align and move the sample precisely. During the data acquisition phase, however, only the rotation motor is working and, in order to avoid additional movement caused by acceleration and deceleration, the rotation motor finishes each angular rotation in exactly one second. For alignment, the rotation motor calculates the destination and moves at the default speed of 1500 steps per second. In the data acquisition process, the motor moves differently: each rotational movement always takes one second regardless of the data acquisition angle, because the speed is scaled linearly with the rotational angle. To prevent a rotation speed high enough to damage the sample, the maximal speed is 500 steps per second and it cannot be changed by the user. Because the duration of each rotation is fixed, the computer does not need to poll the motor to check whether it has stopped, which saves considerable CPU power.
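Assuming the 0.01° per step resolution given in section 3.3.6, the fixed one-second rotation can be sketched as follows; the numbers and the speed cap come from the text, while the function itself is only illustrative.

# Illustrative calculation of the rotation speed used during data acquisition.
# Assumes 0.01 degrees per motor step (36,000 steps per revolution) and the
# 500 steps/s safety cap mentioned in the text.
STEPS_PER_DEGREE = 1.0 / 0.01        # 100 steps per degree
MAX_SPEED_STEPS_PER_S = 500
ROTATION_TIME_S = 1.0                # every angular increment should take one second

def rotation_speed(angle_deg):
    """Speed (steps/s) so that a rotation of angle_deg finishes in about one second."""
    steps = angle_deg * STEPS_PER_DEGREE
    return min(steps / ROTATION_TIME_S, MAX_SPEED_STEPS_PER_S)

# Example: a 0.9-degree increment is 90 steps, so the motor runs at 90 steps/s,
# well below the 500 steps/s cap.
print(rotation_speed(0.9))   # 90.0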

In order to integrate the motors into the system and achieve these functions, the public VIs of the Motor class are reused to build the related tasks inside a state machine. The combination of OOP and a state machine ensures that the specific methods of each object are decoupled from the workflow design: even if the workflow shown in figure 3.3 is redesigned, the motor methods are not affected at all. As figure 3.14 displays, a data queue is created for the motor handler to receive messages from other parts of the system. Inside the state machine, there are cases which perform the motor functionalities, such as rotational and up-down movements. Additionally, the motors are not data generators; hence, there is no need for a producer-consumer pattern.

For the camera, all of its methods need to be defined in the Camera class first. Those VIs are then integrated and implemented in a state machine according to the workflow of the camera.


Figure 3.14. The motor handler receives a task from its own queue and uses the dequeue function to get the message and data. Then, according to the message information, it executes the task in its state machine by calling the motor functions.
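The motor handler loop in figure 3.14 can be paraphrased, purely as an illustration, as a queue-driven state machine; the state names and the Motor methods used here are assumptions for the sketch, not the actual class interface, and the queue follows the MessageQueue sketch above.

# Illustrative queue-driven state machine for the motor handler. The state
# names and the Motor methods (rotate, move) are assumptions for this sketch.
def motor_handler(motor_queue, motor):
    while True:
        message, message_data = motor_queue.dequeue()
        if message == "shutdown":
            break
        elif message == "rotate":                 # e.g. rotate by a given angle
            motor.rotate(angle_deg=message_data)
        elif message == "move":                   # e.g. linear move along one axis
            axis, distance = message_data
            motor.move(axis, distance)
        # further cases (up-down movement, homing, ...) are added as new states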

When the camera is working, it captures frames every second and these images need to be either saved or presented to the user; thus, a producer-consumer architecture was chosen (figure 3.15).

Figure 3.15. The producer-consumer loop is applied for the camera. The camera acquires high-resolution images which are around 8 MB each. Because displaying and saving the images takes a few seconds, the camera data must be processed asynchronously in the consumer loop in order to reduce the data acquisition time.

In the producer loop, the camera methods are called to complete certain duties. In order to use the camera properly without damaging it, the camera has to follow a fixed workflow sequence. It begins by checking whether the camera is on and connected to the PC. Then, the camera has to be configured with the correct parameters and the proper capturing mode.

The camera driver provides two different capturing modes: sequence mode and snap mode. In sequence mode the camera is able to take at least 3 frames per second, whereas in snap mode it takes one frame per second. In the sample alignment phase, sequence mode is configured for the camera in order to visualize the real-time position of the sample inside the chamber. In OPT or OPT/EIT mode, however, the camera is set to snap mode to avoid duplicate images. After that, the camera prepares itself for imaging. When the camera is ready, it waits for the frames and captures them. Before the camera quits, it has to stop the capture and unprepare itself. Importantly, if the camera is to be configured again, it has to exit the capture mode, unprepare itself and reset its configuration data; otherwise, the camera throws an error and the LabVIEW software crashes.
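The fixed camera workflow described above can be summarised in Python-style pseudocode; the Camera method names are invented for the sketch and do not correspond to the actual Hamamatsu driver VIs.

# Conceptual summary of the fixed camera workflow. The Camera methods are
# invented for this sketch; they do not correspond to the real driver VIs.
def run_camera(camera, mode, stop_requested, handle_frame):
    if not camera.is_connected():
        raise RuntimeError("camera is off or not connected to the PC")

    camera.configure(mode)            # "sequence" for alignment, "snap" for OPT(/EIT)
    camera.prepare()
    camera.start_capture()
    try:
        while not stop_requested():
            frame = camera.wait_for_frame()
            handle_frame(frame)       # hand the frame over to the consumer loop
    finally:
        # Always leave the camera in a clean state; reconfiguring without
        # stopping, unpreparing and resetting first causes a driver error.
        camera.stop_capture()
        camera.unprepare()
        camera.reset_configuration()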

Considering that each image is about 8 MB and LabVIEW is based on the data-flow principle, larger data leads to longer transmission times and the queue memory space would not suffice either. Therefore, instead of passing the image itself, a reference to the image in memory is transferred by using a data value reference. The consumer loop dereferences the reference and retrieves the image. Compared with the size of an image, the size of a reference is only 16 bytes; hence, the data transmission rate is dramatically improved and much less RAM is consumed for storing the images temporarily.
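The idea of enqueuing a small reference instead of the 8 MB image can be imitated in text form as follows; the dictionary simply plays the role of the LabVIEW data value reference and the queue reuses the MessageQueue sketch above.

# Text-form imitation of the data value reference idea: the producer enqueues a
# small key, and the consumer uses it to look the image up in shared memory.
import uuid

image_store = {}   # stands in for the image held in RAM behind a data value reference

def enqueue_image(image_queue, image):
    ref = uuid.uuid4().hex            # a few bytes instead of ~8 MB per frame
    image_store[ref] = image
    image_queue.enqueue("new image", ref)

def consume_image(image_queue, display_and_save):
    message, ref = image_queue.dequeue()
    image = image_store.pop(ref)      # dereference: fetch and release the image
    display_and_save(image)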


3.4.5 The Implementation of EIT Device

Since the original EIT software is made specifically for the EIT device and all its drivers apply only to it, the integration of the two imaging methods must not break the functionality of the EIT device. For that reason, we decided that this code would remain intact during integration. Considering that the EIT device acquires and displays data at different rates, the producer-consumer pattern was chosen for integrating the original EIT software, as shown in figure 3.16.

Figure 3.16. The EIT device needs to acquire data constantly and display it on the UI. Thus, a producer-consumer pattern is the best solution for it. The consumer loop can take care of the EIT data at its own rate.
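The rate decoupling that motivates this choice can be shown with a minimal two-thread sketch; the acquisition source and the processing function are arbitrary example values and not the actual EIT driver.

# Minimal illustration of the rate decoupling between EIT acquisition and
# display/saving. The data source and worker functions are examples only.
import queue
import threading
import time

def eit_producer(q, acquire_frame, n_frames=100):
    for _ in range(n_frames):
        q.put(acquire_frame())        # runs at the device's acquisition rate
    q.put(None)                       # sentinel: no more data

def eit_consumer(q, process_and_display):
    while True:
        frame = q.get()
        if frame is None:
            break
        process_and_display(frame)    # may be slower; the queue absorbs the difference

# Example wiring:
q = queue.Queue()
threading.Thread(target=eit_producer, args=(q, lambda: time.time())).start()
eit_consumer(q, lambda frame: None)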

The integration of the EIT part is divided into two steps. First, the EIT UI is combined with the OPT UI by fitting it into the same tab control, but on a different page, in LabVIEW, and all the user interaction events are handled by the same main UI producer-consumer structure. The original EIT software does not adopt the producer-consumer design pattern and its code is mostly developed in one VI. Therefore, we
