
Lappeenranta University of Technology
Faculty of Technology Management

Degree Program in Information Technology

Joel Kurola

DEVELOPMENT OF A MEASURING-CONTROL SYSTEM FOR ENERGY MONITORING IN LOW POWER

Examiners: Professor Heikki Kälviäinen

Associate Professor Vyacheslav Potekhin

Supervisor: Associate Professor Vyacheslav Potekhin


TIIVISTELMÄ

Lappeenranta University of Technology
Faculty of Technology Management
Degree Program in Information Technology

Joel Kurola

Development of a control system for energy monitoring in a low-voltage network

Master's Thesis

2012

52 pages, 17 figures, 1 table

Examiners: Professor Heikki Kälviäinen, Professor Vyacheslav Potekhin

Keywords: intelligent energy grid, optimization, decision making, control system

This thesis presents a control system for a low-voltage network, implemented using a multi-agent approach. The approach makes the system adaptive and easy to extend to meet future needs. The control system can forecast future energy demand and make decisions without human interaction. The work was carried out as part of a project at St. Petersburg State Polytechnic University whose purpose is to create a smart grid for the university's own use. As a concept, the smart grid is also interesting from the user's perspective, since it offers new possibilities for controlling and reducing one's own energy consumption. A smart grid enables real-time monitoring of energy consumption and thereby adaptation to the current situation. It also makes it easier to use renewable energy sources as part of energy production, and users can sell surplus energy back to the grid.


ABSTRACT

Lappeenranta University of Technology
Faculty of Technology Management

Degree Program in Information Technology

Joel Kurola

Development of a measuring-control system for energy monitoring in low power

Master’s Thesis

52 pages, 17 figures, 1 table

Examiners: Professor Heikki Kälviäinen

Associate Professor Vyacheslav Potekhin

Keywords: intelligent energy grid, optimization, decision making, control system

This thesis presents a control system for an intelligent low-voltage energy grid. The control system is built using a multi-agent approach, which makes it versatile and easy to expand according to future needs. The system is capable of forecasting future energy consumption and of making decisions on its own, without human interaction, when it encounters problems. The control system is part of the St. Petersburg State Polytechnic University smart grid project, which aims to create a smart grid for the university's own use. The concept of the smart grid is also interesting for consumers, as it brings new possibilities to control their own energy consumption and to save money. Smart grids make it possible to monitor energy consumption in real time and to adjust one's habits accordingly. An intelligent grid also makes it much easier than current systems to integrate renewable energy sources into global or local energy production, and consumers can sell their surplus power back to the grid if they wish.


ACKNOWLEDGEMENTS

This thesis was done as part of a smart grid project of St. Petersburg State Polytechnic University. The thesis is also part of my education at Lappeenranta University of Technology. I want to thank all the members of my research group who worked on this project and, of course, my professors Kälviäinen and Potekhin, who provided the guidance to finish this work.

St. Petersburg, Russia, 30.04.2012

Joel Kurola


CONTENTS

1 INTRODUCTION
1.1 BACKGROUND
1.2 OBJECTIVES AND RESTRICTIONS
1.3 STRUCTURE OF THE THESIS
2 SMART GRID
2.1 SMART GRID PROJECT
2.2 INTELLIGENT ENERGY GRID
3 METHODS
3.1 DATA MINING AND CLUSTERING
3.2 CLASSIFICATION
3.2.1 Neural Network
3.2.2 Probability density function
3.3 DECISION MAKING IN SMART GRID
3.3.1 Decision making theory
3.3.2 Rule engines
4 SYSTEM STRUCTURE
4.1 MULTI-AGENT SYSTEM
5 COMMUNICATION
5.1 COMMUNICATION BETWEEN DEVICES
5.2 COMMUNICATION BETWEEN LAYERS
6 CONTROL SYSTEM
6.1 AGENTS
6.2 DATA MINING
6.3 FORECASTING
6.4 DECISION MAKING
7 DISCUSSION AND FUTURE WORK
7.1 COMMUNICATION
7.2 DATA MINING
7.3 DECISION MAKING
7.4 RELIABILITY
7.5 VISUALIZATION
7.6 DISTRIBUTED ENERGY PRODUCTION
7.7 CLIENT CONNECTION
8 CONCLUSION
REFERENCES


LIST OF SYMBOLS AND ABBREVIATIONS

ANN Artificial Neural Network
ARIMA Autoregressive Integrated Moving Average
BAG Battery Agent
DB Database
DM Data Mining
ECMWF European Centre for Medium-Range Weather Forecasts
ERP Enterprise Resource Planning
GLM Generalized Linear Model
GPA Green Power Generation Agent
GTA Gas-Turbine Agent
LAG Load Agent
LAN Local Area Network
LSM Least-Squares Method
MADM Multi-Attribute Decision Making
MFNN Multi-layer Feed-forward Neural Network
MLP Multilayer Perceptron
MM Markov Model
NN Neural Network
OLS Ordinary Least Squares
PDF Probability Density Function
RNN Recurrent Neural Network
RRMSE Relative Root Mean Square Error
SAP Systems, Applications and Products
SCADA Supervisory Control and Data Acquisition
SGC Smart Grid Controller Agent
SPBSTU St. Petersburg State Polytechnic University
WCF Windows Communication Foundation
WiMAX Worldwide Interoperability for Microwave Access
WLAN Wireless Local Area Network


1 INTRODUCTION

This chapter presents the background, the goals, and the restrictions of this work. The structure of the thesis and the contents of the following chapters are also outlined.

1.1 Background

This thesis is part of a project hosted by St. Petersburg State Polytechnic University (SPBSTU) [1] together with SAP (Systems, Applications and Products). The goal of the project is to create an intelligent control system for forecasting and optimizing the electricity consumption of households and companies. This thesis presents the software for the control system, which can be used to analyze the data gathered from different locations and to draw conclusions from it. As this is only one part of the project, it would not work without the other components created by the rest of the group taking part in the project.

The software presented in this thesis uses the research results and solutions of the other people taking part in the project. The main goal of the project is to provide customers with means to supervise their own electricity consumption and to schedule it to decrease their costs and, of course, to balance the overall electricity consumption to optimize the efficiency of the electricity grid.

1.2 Objectives and restrictions

The goal of this thesis is to create a control system using a multi-agent approach. The agents of the system are able to forecast future energy consumption and to make decisions using the expert system connected to the control system. This thesis does not present the final code of the agents, but rather the structure of the control system and the functionality of the agents.


This thesis presents how to create a control system for a smart grid using a multi-agent approach and what functionality each of the agents should have. The thesis also presents how to use the collected data to create new rules for the rule base, and how to cluster the gathered information to find new connections or information in the data.

The thesis does not consider the low-level communication between agents and other devices in the smart grid, but it does discuss how to handle the communication between individual agents. Some mathematical formulas are presented in this thesis, but they were not created in this work; the formulas describe well-known methods that are used in the thesis.

1.3 Structure of the thesis

The following chapters take a deeper look at the project that this thesis is part of, and at the results and conclusions that can be drawn. Chapter 2 covers the background of the project and its objectives, and also considers the concept of an intelligent energy grid.

Chapter 3 considers the methods used in this work, including the mathematical models and the methods of decision making that can help achieve the goals.

Chapter 4 presents the structure of the system and the multi-agent approach. The communication between devices is discussed in Chapter 5. As the system planned in this project has multiple layers, with several devices in each layer, the software has to be able to communicate with other devices, and the communication between layers has to be reliable. Chapter 5 considers the methods to achieve these goals.

Chapter 6 considers the control system that is the main element of this thesis: its functionality and the way it is built are discussed. After this, Chapter 7 assesses how well the objectives given to this work were met and how the software could be improved in the future. Finally, the conclusions are given in Chapter 8.


2 SMART GRID

This chapter considers the background of the project that this thesis is part of. The intelligent energy grids that are an important part of this work are also discussed.

2.1 Smart grid project

The objectives of the project are to create an intelligent system for optimizing the energy grid and to monitor the electricity consumption in companies and households.

As a basis for these goals, a report [1] had been made about possible methods to monitor electricity consumption, the possibilities to create a distributed system for monitoring consumption and making decisions, and the use of existing Enterprise Resource Planning (ERP) systems as part of the final system.

According to the report [1], even a small thing like monitoring electricity consumption has an effect when it comes to optimizing and minimizing costs. Continuous and precise monitoring of the consumption makes it easier to react quickly to changes in electricity prices and consumption. It also helps to recognize possible problems in the network and to check the effectiveness of changes made in the network. For companies and households, monitoring the consumption makes it possible to set goals for measures meant to decrease energy consumption and to monitor the results of those measures.

For the structure of the system, the report [1] proposed a five-layer architecture, where the first layer includes the households and companies from which the data is collected, and the second layer consists of the hardware that collects the data. The third and fourth layers are quite similar in the sense that both collect the data coming from the lower layers and assess its quality and reliability. The fifth and final layer gathers data from the fourth layer, processes it, and makes decisions according to the analysis of the data. The methods used to make the decisions can be anything from data mining (DM) to neural networks (NN): data mining makes it possible to find new rules and to cluster them, and an NN can find the correct rule for given attributes; these are discussed further in Chapter 3. The structure of the system is presented in Figure 1.

Figure 1. Structure of the system. [1]


The report [1] considers the possible use of an existing Enterprise Resource Planning (ERP) system as part of the whole intelligent energy grid a good idea, but at least at the moment there are some implementation problems. These problems are mainly caused by the lack of modules meant for monitoring the state of energy consumption and transfer, which makes it difficult to integrate the existing systems to work with each other. However, ERP systems provide a wide variety of functionality that is useful also for companies in energy production.

2.2 Intelligent energy grid

The objectives of an intelligent energy grid, or a smart grid, are to improve the efficiency of energy consumption and to provide more advanced methods to solve the problems detected in the grid [2]. In many ways the smart grid is only an upgraded version of our current electricity network. For making decisions the network requires data, which can be gathered even with current remotely readable devices. After this, only the software that makes the decisions according to the data is needed. The possibilities provided by a smart grid can be seen in Figure 2.

Figure 2. Possibilities of smart grid [2].


Bringing intelligence to the energy grid benefits anyone who depends on the availability of electricity. An energy grid with remotely readable metering devices has almost live information about electricity demand, can target electricity production to the demand spikes much better, and can prepare itself better for possible problems. The system can fix detected problems by alerting the repair group closest to the problem or by increasing electricity production if that seems to be a viable solution. [2] This way power failures can be avoided better than before, or at least the problems can be detected and repaired faster.

Electricity production can be more ecological than it is now if the use of renewable energy sources, such as wind and solar power, is increased. These forms of energy are highly dependent on weather conditions and are not, at least at the moment, very good primary sources of energy. With a smart grid this situation can be improved: as much power as possible is created using renewable sources when the weather allows, and the use of fossil sources is increased when the weather is not favorable for sources like wind and sun. This way even an individual household can create most of its energy using renewable sources and increase the amount of power taken from the public energy grid only when the situation requires it.

The information on energy consumption is almost live and the flow of information is two-way, so households can benefit from the system by monitoring their own consumption and the current price of electricity. This way they can shift their use of electricity outside the price spikes and decrease their electricity bill. [2, 3, 4]


3 METHODS

This chapter considers the methods that can be used to achieve the goals of the thesis. How can patterns, features common to all data, be found in the electricity consumption data, and how can these patterns be classified to recognize the proper countermeasures or decisions to take? How can future patterns be found beforehand, and how can this information be used? These are the questions considered and answered in this chapter. The final choice of the methods used in the work is not made yet; only a wider look at the available possibilities is taken.

3.1 Data mining and clustering

Data mining (DM) is the process of finding new information from data. The content of the data depends on the situation and can, for example, be a shop's data about customer purchases or the consumption data of a city's energy grid. In any case, with DM it is possible to find unknown connections between variables, to find new rules, or to connect certain data to a customer so that the person can be recognized later. [5] The possibility of customer identification is more related to clustering, which is part of DM but discussed later.

Lee and Siau presented in their report [5] an overview of DM and its different techniques. According to them there are three main steps and six major techniques of DM. The three steps are preparing the data, reducing the data, and finding new information. In the first step the data is gathered from various sources into a database (DB) and retrieved from the DB. In the second step a portion of the data is selected: not all of the data is processed, but only the portion that is relevant to the current problem. In the second step all unnecessary and broken data is also removed to minimize the size of the data set. The third step is the process of finding information from the selected data set.
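The three steps can be sketched as a small pipeline. The record layout and the selection criterion below are hypothetical examples, not the project's actual data format:

```python
# Sketch of the three DM steps: gather, reduce, find information.
# The record format (timestamp, meter_id, kwh) is a hypothetical example.
records = [
    ("2012-01-01 00:00", "meter-1", 1.2),
    ("2012-01-01 01:00", "meter-1", None),   # broken reading
    ("2012-01-01 00:00", "meter-2", 0.8),
    ("2012-01-01 01:00", "meter-2", 0.9),
]

# Step 2: select only the portion relevant to the problem (meter-2 here)
# and drop broken or unnecessary rows.
relevant = [r for r in records if r[1] == "meter-2" and r[2] is not None]

# Step 3: find information from the reduced set, e.g. the mean consumption.
mean_kwh = sum(r[2] for r in relevant) / len(relevant)
```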


The techniques presented by Lee and Siau [5] are decision trees, artificial intelligence (AI), genetic algorithms, visualization, statistics, and techniques for mining transactional/relational DBs. Lee and Siau presented in 2001 that genetic algorithms are a main technique of DM, whereas in their 2011 review Venkatadri and Lokanatha [6] presented genetic algorithms as a DM technique of the future.

The techniques presented in [5] and [6] are mostly similar; the biggest difference is the mention of fuzzy logic as a DM technique in [6].

When considering the types of DM, that is, what kind of data is mined to get the information, Venkatadri and Lokanatha present in their work [6] a list of different DM types. Their list consists of five types: hypermedia, ubiquitous, multimedia, spatial, and time series. Each DM type also has a somewhat different application area and its own techniques for mining that kind of data.

One part of DM is clustering, which means dividing data into groups according to similarity. In his work Berkhin [7] presents eight groups of clustering algorithms that are usable for clustering in DM. The most interesting groups are the hierarchical and partitioning methods, which include methods like K-means and density functions. These methods are the most likely ones to be used as part of the control system. Although these methods have problems handling higher-dimensional data, that should not be a problem here.

As mentioned, clustering is a process of dividing the data into groups, and these groups can be used later to define the classes into which the input data is classified. Figure 3 presents how a set of data is divided into four clusters; later in this thesis it is shown how data can be classified into classes created from these clusters.
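The K-means method mentioned above can be sketched in a few lines. The points and the number of clusters are illustrative, not data from the project:

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal K-means sketch: assign each point to its nearest
    centroid, then move each centroid to the mean of its points."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)      # pick k starting centroids
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: nearest centroid for every point
        labels = [min(range(k), key=lambda j: math.dist(p, centroids[j]))
                  for p in points]
        # update step: move each centroid to the mean of its cluster
        for j in range(k):
            members = [p for p, lab in zip(points, labels) if lab == j]
            if members:
                centroids[j] = tuple(sum(c) / len(members)
                                     for c in zip(*members))
    return centroids, labels

# two well-separated, hypothetical consumption patterns (x, y)
pts = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9)]
centroids, labels = kmeans(pts, k=2)
```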

In her work Inniss [8] presented a seasonal clustering technique for clustering time series data. The system was designed for avionic purposes and therefore had a huge number of seasons, 60. The idea is still the same as in this thesis, as the goal is to find clusters that can be defined according to the season. In the work Inniss combined several different techniques, but managed to create a technique that made it possible to cluster the time series data seasonally.

Figure 3. Clustering data into four clusters.

3.2 Classification

In classification the goal is to find the class, defined according to the clusters found earlier, to which the input data belongs. The input data can be, for example, numeric information about energy consumption, from which patterns are looked for. The patterns are features common to all inputs but slightly different in every class, so according to these features the input can be classified into one of the classes. Figure 4 shows how an input is classified into the correct cluster found in Figure 3.

Figure 4. Classifying input.
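A minimal sketch of this kind of classification, assuming a simple nearest-centroid rule against clusters found earlier; the centroids and class names are hypothetical:

```python
import math

# Hypothetical cluster centroids found earlier (e.g. by K-means),
# one per consumption class.
centroids = {"low": (1.0, 1.0), "high": (8.0, 9.0)}

def classify(point):
    """Assign an input to the class whose centroid is nearest."""
    return min(centroids, key=lambda c: math.dist(point, centroids[c]))

label = classify((1.5, 0.5))   # returns "low"
```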


For classification there are several different methods that can be taken into consideration, for example NN, probability density functions (PDF), Markov models (MM), or AdaBoost. Of course the selection of the method depends on the problem it should solve and the information that is available. Each of the methods requires training before it can be used to detect the wanted patterns correctly. Training means that the classification system is given a set of input data and the corresponding outputs that the system should produce.

Using this data the classification system learns how to produce the correct output from the given input. Therefore, preliminary work should be done to recognize the proper classes into which the gathered data is classified and to train the system.

For this work the most interesting methods are NN and PDF. The reasons to select these methods are the vast possibilities that NNs provide and the simplicity of PDFs. With an NN many kinds of input data can be classified with high accuracy if the structure of the network is correct, and PDFs provide straightforward methods for even complex tasks. Therefore, a closer look at these methods is taken.

3.2.1 Neural Network

A neural network (NN), or artificial neural network (ANN), is created to imitate the functionality of the human brain. One of the best-known ANN models is the multi-layer feed-forward neural network (MFNN), which consists of an input layer, an output layer, and hidden layers between them. The knowledge of the MFNN is stored in the weights between the layers, and with training the weights are set to the proper values to solve a certain problem. [9]
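A minimal sketch of an MFNN forward pass, with one hidden layer and sigmoid activations; the weights are random illustrative values rather than trained ones:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# A tiny feed-forward network: 2 inputs -> 3 hidden units -> 1 output.
# The network's "knowledge" lives in the weights; training would tune
# them, here they are just fixed illustrative values.
rng = random.Random(42)
w_hidden = [[rng.uniform(-1, 1) for _ in range(2)] for _ in range(3)]
w_output = [rng.uniform(-1, 1) for _ in range(3)]

def forward(inputs):
    # hidden layer: weighted sum of inputs, squashed by the sigmoid
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)))
              for ws in w_hidden]
    # output layer: weighted sum of hidden activations, in (0, 1)
    return sigmoid(sum(w * h for w, h in zip(w_output, hidden)))

y = forward([0.5, -0.2])
```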

Qin, Ewing and Thompson compared in their work [9] the ANN and the autoregressive integrated moving average (ARIMA) model that is often used with time series data. They made a short-period forecast of the wind speed using an ARIMA model and a recurrent neural network (RNN), shown in Figure 5. They wanted to find out whether the RNN could be used efficiently to forecast time series data. In the work they forecasted the wind speed only 15 minutes ahead, using samples from five different heights and 30 periods. As a result they found out that the RNN gives better results than the more often used ARIMA model.

Figure 5. A recurrent neural network [9].

Cubiles-de-la-Vega, Pino-Mejías, Pascual-Acosta and Muñoz-García described in their work [10] a method to design a multilayer perceptron (MLP) using the information from an ARIMA model. They used the MLP to forecast time series with data sets of the population of Andalusia and of hotel occupancy. In both cases the results of the MLP were better than the ones achieved using the traditional ARIMA model.

3.2.2 Probability density function

The probability density function (PDF) shows the likelihood that a variable takes a certain value, for example the likelihood that the input variable of the system is 1. The PDF can be used for forecasting [11, 12, 13] and for fault detection [14]. The method seems quite promising when thinking about how to realize the system of this thesis. For forecasting, the PDF has been used for wind power [13] and for power distribution systems [11, 12].
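As a minimal illustration of the idea, a normal distribution can be fitted to historical load values and used as a PDF model of the load; the load figures below are invented, and the references above use more elaborate models:

```python
from statistics import NormalDist

# Hypothetical historical load measurements (kW) for one hour of the day.
loads = [10.2, 9.8, 10.5, 10.1, 9.9, 10.4, 10.0]

# Fit a normal distribution as a simple PDF model of the load.
mu = sum(loads) / len(loads)
var = sum((x - mu) ** 2 for x in loads) / (len(loads) - 1)  # sample variance
dist = NormalDist(mu, var ** 0.5)

density_at_10 = dist.pdf(10.0)     # likelihood density of a 10 kW load
p_over_11 = 1.0 - dist.cdf(11.0)   # probability that demand exceeds 11 kW
```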


Heydt, Khotanzad and Farahbaksshian proposed already in 1981 a method [11] for using a PDF to forecast a power system load. They used the Gram-Charlier series of type A for calculating the PDF from forecasted moments. For the forecasting itself they used a time series approach or the least-squares method (LSM). The forecast methods were tested with a short-term forecast (1 h) and over a longer period (20 h). For shorter time periods (<2 h) the time series approach seemed to be more precise, but over longer periods the results were better using the LSM. It is a bit strange to find an over 30-year-old study that handles the very questions that need to be solved in the current problems; one might have thought it would have come into wider and more public use, but that does not seem to be the case. Still, the results they got were quite promising, as the error of the mean consumption was less than 3% half of the time.

In 1999 Charytoniuk, Chen, Kotas and Olinda did a similar study [12] to [11]. They used information on consumer consumption and the weather temperature as parameters. They also used a nonparametric approach, where the data is not assumed to belong to any particular distribution, so there was no need for a statistical analysis of the data. In the study the relative root mean square error (RRMSE) was less than 0.2 when forecasting the demand of 20 households over one month. When the same test was done with commercial customers, factories and plants, the error was closer to 0.1 even with only 5 customers. The work also stated that the accuracy of the method depends on the way the customers are classified and on the number of customers. [12] The energy consumption of industry is more static and easier to predict than the consumption of households.
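The RRMSE figures above can be computed as the RMSE normalized by the mean of the observed values; note that this is one common convention, and [12] may define the normalization slightly differently:

```python
import math

def rrmse(actual, forecast):
    """Relative root mean square error: RMSE divided by the mean of
    the actual values (one common convention; [12] may differ)."""
    n = len(actual)
    rmse = math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / n)
    return rmse / (sum(actual) / n)

error = rrmse([10.0, 10.0], [11.0, 9.0])   # RMSE 1.0 over mean 10.0 -> 0.1
```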

The previous paragraphs have shown that the PDF can be used to forecast the whole energy consumption, but it can be used in a similar way on a smaller scale as well. Taylor, McSharry and Buizza presented a method of using the PDF for forecasting wind power [13]. When thinking about the project this thesis is part of, this is quite interesting, because the smart energy grid is not only the main power lines that come to a home, but also the smaller systems that can be connected to the network. This includes wind power farms and also the individual windmills people can build for their own use. So the PDF could be used to determine when a windmill produces enough power to be useful. In the work they used data from five wind farms in the United Kingdom and the weather forecasts of the European Centre for Medium-Range Weather Forecasts (ECMWF). The wind speed was measured at a height of 10 m, which seems a rather low altitude; it is below the tree line, and usually, at least in Finland, the windmills are much higher to capture the optimal winds. As data they used the daily wind speed from the past 8 years. When the methods were tested the results were at least interesting: when forecasting beyond 2 days, the 5-year mean of the windmill power production gave as precise an answer as any other method used. So if we are not interested in the production of tomorrow but want to see further, the mean of the earlier years can provide all the information we need. In a way this is not a surprising result. The weather is extremely difficult to forecast, but over a longer period it does not change that much. Over short periods forecasting the wind speed is difficult or even impossible, but over longer periods it follows certain patterns.

3.3 Decision making in smart grid

When thinking about how the system makes decisions, there are a few methods that can be considered. This chapter goes through the classical theories of decision making under uncertainty [14] and some of their modified versions, which include more complex approaches like fuzzy logic and neural networks. [15, 16] Besides the classical decision making theories, a look at rule engines is taken. Expert systems are not discussed in this chapter because they include the same decision making methods as rule engines; they are considered later in Chapter 6. [17, 18]

3.3.1 Decision making theory

How to make a decision between the possible outcomes? How to take the uncertainty of life into consideration and still find the best alternative among the given choices?

Kozine [14] gives a simple overview of the non-conventional approaches to decision making under uncertainty, that is, when all the facts are not known at the time of the decision.

From there it is possible to continue to the more sophisticated methods of multi-attribute decision making, where there is more than one attribute or variable to consider when making the decision, using the fuzzy or the grey theories. [15, 19] It is also interesting to take a look at the fuzzy fault diagnosis of Fan and Huang [16], as energy grids and their control systems are also prone to faults.

In its simplest form, decision making is selecting the best-fitting outcome for the given variables. In this form the decisions do not prove to be very useful, because the method treats every possible outcome as equal, which is a rare situation in real life.

To get more difference between the choices it is possible to turn to Bayesianism, which allows handling each outcome with some amount of probability. Though it gives the decision making a bit of flexibility, the rules of Bayesianism are still rigid and do not provide the elasticity required for real-life decision making. This problem can be solved using theories like Gärdenfors-Sahlin or Levi, or by basing the decision making on imprecise coherent previsions. Levi's theory provides a kind of fuzziness, as it makes it possible to define the probabilities of the outcomes as a range instead of a specific value; the probability of an outcome might be between 0.4 and 0.6 instead of exactly 0.5. This way the selection might also be divided over more than one outcome as the probabilities overlap each other. With the Gärdenfors-Sahlin theory the difference from Bayesianism comes from using more than one probability to describe a situation and from adding a value of epistemic certainty to each of the probabilities. Basing the decision on previsions means using the maximum and minimum values of the sum of the products of the probabilities and the utilities. This method gives an interval for each outcome, though the intervals may overlap, and makes it possible to decide the proper outcome. [14]
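The interval-based selection can be sketched as follows, assuming a maximin rule over expected-utility intervals. The states, actions and utilities are hypothetical, and the sketch deliberately ignores the constraint that the probabilities must sum to one:

```python
# Hedged sketch of Levi-style interval probabilities with a maximin
# decision rule. States, actions and utilities are hypothetical, and
# the sketch ignores the constraint that probabilities sum to one.
prob_ranges = {"cheap_power": (0.4, 0.6), "expensive_power": (0.4, 0.6)}
utilities = {
    "run_load_now":  {"cheap_power": 10.0, "expensive_power": -5.0},
    "postpone_load": {"cheap_power": 2.0,  "expensive_power": 2.0},
}

def utility_interval(action):
    """Min and max of sum(p * u) over the allowed probability ranges."""
    lo = hi = 0.0
    for state, (p_lo, p_hi) in prob_ranges.items():
        u = utilities[action][state]
        lo += u * (p_lo if u >= 0 else p_hi)  # worst-case weighting
        hi += u * (p_hi if u >= 0 else p_lo)  # best-case weighting
    return lo, hi

# maximin: choose the action with the best worst-case expected utility
best = max(utilities, key=lambda a: utility_interval(a)[0])
```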

As an improvement on the theories presented in the previous paragraph, a look at multi-attribute decision making (MADM) theory should be taken [15]. The idea of the theory is to provide methods for decision making even when there are multiple criteria to take into account. Within this theory, interest is focused on two approaches: fuzzy and grey. [15, 19] The fuzzy and the grey theories of MADM are not identical, but they do have some similarities when it comes to the meaning of the theories. Both theories bring a human kind of approach to making decisions, each in its own way. The fuzzy theory [15] captures human thinking, where the values are not precise but rather fuzzy and easily describable by a linguistic variable. In the grey theory [19] the data might be unstructured, vague and deficient, so there is no way to know for certain whether the provided data is even valid at the time of decision making. In a real-life system the possibility to use these kinds of approaches to deal with insufficient data is critical. In some cases part of the data is missing, outdated or corrupted, and thus not usable, but the system has to make the decision even when the data is not complete.

The fault diagnosis by Fan and Huang [16] does not provide any new theoretical information about fuzzy decision making theories, but as a working and functional system it is an interesting sample of fuzzy-based diagnosis and decision making, which are among the areas of interest of this thesis. The system includes a fuzzy hypersphere set neural network working as a fuzzy rule set and a number of sensor sets providing data. The system works with live data and is capable of detecting faults in 0.02 s. [16] This suggests that fuzzy rule sets are capable of online situations requiring fast responses. From the point of view of the thesis this gives something to think about when considering the methods of decision making. The frame of the system can be seen in Figure 6.

Figure 6. Frame of a fuzzy fault diagnosis [16].


3.3.2 Rule engines

Rule engines use their rule base to determine what they should do: they try to find the rule that best matches the current situation. For matching, the most used algorithm is still the Rete algorithm, although it was first presented in the 1970s. [17, 18] When it comes to categorizing rule engines, they can be divided into two main categories, forward and backward chaining, according to the order in which they process their information. [17] A more illustrative view of the difference between these categories can be obtained by observing Figures 7 and 8. The main difference is that in forward chaining a decision is found according to the given variables (attributes -> rules -> decision), while backward chaining works in the opposite direction (decision -> rules -> attributes).
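A forward-chaining loop of this kind can be sketched in a few lines; the facts and rules are hypothetical smart-grid examples, not the actual rule base of the project:

```python
# Minimal forward-chaining sketch: fire rules whose conditions are
# satisfied by known facts until no new fact can be derived.
# Facts and rules are hypothetical smart-grid examples.
facts = {"price_high", "battery_charged"}
rules = [
    ({"price_high", "battery_charged"}, "use_battery"),
    ({"use_battery"}, "reduce_grid_load"),
]

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)   # attributes -> rules -> decision
            changed = True
```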

Why go through these methods and ignore the others? Mainly because of the Rete, as it is the foundation for many other methods, which still cannot outperform it, and because Constraint Handling Rules (CHR) [18] provide improved performance compared to the Rete, although there are a few differences between the two.

Figure 7. Forward chaining [17].


Figure 8. Backward chaining [17].

As observed, the most used matching algorithm for rule engines, the Rete, is already quite old, and therefore Weert [18] provides an overview of another


approach called CHR. The language of the CHR is rich enough to create and describe real-life rules. This matters because the language defines which rules can be expressed: if the language does not provide the means to describe the rules precisely enough, it is difficult to create a rule set that satisfies the needs of real-life situations. The very core of the language includes only the rules and the facts, and on closer inspection patterns can also be found under the rules, which makes the structure of the language very similar to the Rete. [18, 20] Only minor differences exist. The CHR also provides priorities for the rules, so a firing order can be specified such that a rule never fires if its priority is too low. [18] As the Rete method is a base for several expert systems [17], it should not be surprising that more modern rule-based languages have a very similar language structure.

The rule base is an important part of the rule engine, as the engine needs some place for the rules. If the rule engine has no rule base to store the rules, it cannot find the correct decision to make, and its functionality is therefore crippled. The importance of the facts, and of the methods to control them, should also not be underestimated. Weert [18] defines that the CHR uses a fact base to administrate the facts of the system and that the basic functions of the fact base are: create, store, kill, alive and lookup. The Rete method handles facts in the working memory. The functionality is quite similar to the CHR, as the Rete is capable of adding and removing facts; with the Rete, updating a fact means removing the old fact and adding a new one to replace it. [20] In this comparison the main advantage of the CHR is the alive command, which makes it possible to check whether a certain fact still exists or has already been removed by the kill command. Whether the alive command gives any remarkable advantage is questionable, but in some cases it probably proves to be useful.
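The five fact-base operations listed above can be illustrated with a toy implementation. The class and method names mirror the CHR operations but are otherwise invented for this sketch, and create and store are merged into a single call:

```python
# A toy fact base with CHR-style operations. Killed facts are kept as
# tombstones with a flag, so that alive() can still answer for them.

class FactBase:
    def __init__(self):
        self._facts = {}       # identifier -> (fact, alive flag)
        self._next_id = 0

    def create(self, fact):
        """create + store: register a fact and return its identifier."""
        fid = self._next_id
        self._next_id += 1
        self._facts[fid] = (fact, True)
        return fid

    def kill(self, fid):
        fact, _ = self._facts[fid]
        self._facts[fid] = (fact, False)   # mark removed, keep the tombstone

    def alive(self, fid):
        return self._facts[fid][1]

    def lookup(self, predicate):
        """Return all living facts matching the given predicate."""
        return [f for f, ok in self._facts.values() if ok and predicate(f)]

fb = FactBase()
a = fb.create(("status", "agent1", 1))
b = fb.create(("status", "agent2", 0))
fb.kill(b)
living = fb.lookup(lambda f: f[0] == "status")   # sees only living facts
```

The tombstone design is what makes the alive query answerable after a kill; a fact base that physically deleted entries could not distinguish "removed" from "never existed".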

As already mentioned, the Rete is the de facto matching algorithm for rule engines in spite of its age [17, 18], so why even consider the CHR when many other solutions have not been able to improve on the performance of the Rete? [18] Weert [18] claims that the CHR includes improvements and optimization methods which would also improve the effectiveness of the Rete. So what has been improved in the CHR to


satisfy these claims? Weert [18] presents methods to optimize the join computations, to minimize the overhead in the fact base caused by indexing, to speed up activations of the facts and to optimize the use of the history in the CHR. The first two methods are closely related, because without proper indexing it is difficult to perform the join queries on the database. The basic functionality of a rule engine is to find facts that match the rules, so the join computations play a huge role in rule engines and are therefore a suitable target for optimization. Optimizing the fact activations includes removing unnecessary rules from activations and altering the way a proper rule is found by the facts. The history modification improves the way the activation history of the rules is kept and removes unneeded history information to minimize the size of the history. [18]


4 SYSTEM STRUCTURE

This chapter presents the structure of the planned system. The basic idea of the system structure was introduced in Chapter 2, and in this chapter a more specific presentation is given. This is done because the planned system differs from a usual energy grid in one important way: usually the energy grids are centrally managed, or the management is distributed to a few places, but in this chapter a multi-agent approach is presented. The main difference between the multi-agent and the centrally managed systems is that in the multi-agent system every device can be its own master and make the local decisions on its own instead of using the main control system for decision making. The structure is quite similar to the one presented in Figure 1. The final structure can be seen in Figure 9.

Figure 9. Structure of a multi-agent control system.

4.1 Multi-agent system

In the multi-agent approach the control system is divided into smaller systems, agents, which can work autonomously without human interaction [21]. Each individual agent can be assigned a very specific set of actions it can take to solve the problems or tasks it might encounter. When comparing this approach to a more common supervisory control and data acquisition (SCADA) system, it is quite easy to see the benefits the multi-agent approach brings.

The SCADA is often a centralized [21] system, or only partly distributed, so the complexity of the system increases as more and more functionality is added. This makes the system difficult to adjust for a changing environment. The centralized approach also puts the whole system at risk in the case of a malfunction: if the SCADA does not work, the whole system is down, because there is no other node capable of making the decisions or controlling the grid.

When using a multi-agent system (MAS) these kinds of problems disappear, at least partly. In the MAS there will not be a full system failure if a single agent, or even several agents, malfunction. As the agents are capable of communicating with each other, it is even possible to route the necessary information through a different agent, so losing an agent would have no effect on the efficiency of the system. Of course the seriousness of the problem depends on the type of the malfunctioning agent, but still, when looking at the whole system, only some part of the functionality is lost in this kind of case.

Ueda and Nagata [22] proposed in their work five different kinds of agents which could be used as parts of the smart grid: the smart grid controller agent (SGC), the load agent (LAG), the green power generation agent (GPAG), the gas-turbine agent (GTAG) and the battery agent (BAG) [22]. The SGC and the LAGs can be found even in small MASs, as they provide the basic functionality of the control system. The GPAGs are an important part of the smart grid concept, as they bring the possibility to use renewable energy sources, such as the sun or the wind, as a part of the energy production. Of course, instead of


having only a GPAG for every green power source, the concept could be divided into smaller parts, so that every energy source would have its own specific type of agent, as is the case with the GTAG.

The concept of the smart grid is relatively new and there is no established structure for the system. Siemens held a competition on the structure of the future smart grid, in which there were two propositions for a structure using the MAS [21, 24]. Although both solutions concentrated on a microgrid [23], the results can be used when thinking about the whole smart grid. The architecture of an agent can be seen in Figure 10.

Figure 10. The architecture of an agent [21].

The microgrids can be seen as the building blocks of the smart grid. They are small independent parts of the grid that can get the full benefit of the renewable energy sources and work as producing units of the grid, or outside it as separate islands. [22]


It is interesting that both works [21, 24] take into consideration the possibility of two-way power flow: the consumers can buy electricity as usual, but if they have their own power production they can sell the extra power to the global grid. Logenthiran, Srinivasan and Khambadkone [21] also created a bidding system, seen in Figure 11, which finds out the price to pay for the output that the system can provide to the global grid.

Figure 11. A bidding system [21].

In their work Logenthiran and Srinivasan [24] provided a layout, seen in Figure 12, of a multi-agent system that is capable of controlling a local microgrid and its several power production sources and of communicating with the global grid, so that the extra power can be sold there or saved into a battery for later use. When considering the agent roles that Ueda and Nagata [22] presented, it is possible to see that almost all of the agent types are present in the work of Logenthiran and Srinivasan [24].


Figure 12. Interaction of the grid [24].


5 COMMUNICATION

In an electric grid, smart or not, the flow of information is extremely important, because everything that is known about the state of the grid or, for example, the amount of consumed energy is data gathered from the devices in the network. When the communication fails, even between a few devices, the whole system might run into problems, depending on the system structure presented in Chapter 4. In this chapter a brief look is taken into the communications of the system: how the devices communicate and what might be done to ensure a secure and constant data flow without problems.

5.1 Communication between devices

The communication between devices mainly concentrates on the first level of the structure shown in Figure 1. At the first level most of the data transfer is from the households, i.e. the measuring equipment, to the control equipment, the Siemens Simatic. Because this is the main information source of the system, it is important to make sure that none of the data is lost because of small errors. This is why the communication medium between devices has to be reliable and secure enough, and there has to be a way to overcome problems in a single piece of measuring equipment. Therefore a failsafe is needed to make sure that the data coming from the measuring devices is always routed to a working unit if the closest or main one is not working for some reason. In the upper levels most of the communication is done only between the layers, although there, as at the first level, has to be some communication between the devices to make sure that every device is working and to distribute the data to the remaining equipment in the case of a malfunction in some of the measuring devices.

When considering the communication medium, there are a few obvious choices: firstly, using a Cat 5 cable, or higher, for transferring data at the first level, and secondly, using an optical fiber at the higher levels. The Cat 5 provides enough bandwidth for the data


transfers, and with the limited distances at the first level it is fast enough for the needs of the system. When the distances get longer the medium has to be changed, because the Cat 5 is not meant for extreme distances. Thus the optical fiber can be used as the medium at levels two to five; as it is already used in the main networks of the Internet, it can handle the needed data amounts easily and with low latency. The wired systems also provide a higher level of security and reliability compared to the wireless systems. Still, the communication can be arranged wirelessly: instead of the Cat 5 a wireless local area network (WLAN) could be used, and the optical fiber could be replaced by a Worldwide Interoperability for Microwave Access (WiMAX) solution or, when the distances grow even longer, by satellite communication. With a wireless solution there is always the problem of security, so methods like WPA2 should be used. With long distances the latency might also cause problems, because satellite communication is not as fast as communication over optical fiber.

5.2 Communication between layers

The cross-layer communication is mainly done between a lower-level SCADA system and the higher-level control equipment, or with the global processing center. Similar communication is also done inside the layers, from the control equipment to the SCADA, but from the communication point of view it is the same thing as the cross-layer communication.

There is no direct way to get data from the Simatic controller, or to set data there from the SCADA, but this can be done using the Siemens multi-point interface (MPI), the proprietary interface of Siemens logics. There are some libraries that provide programmers the methods to create programs that communicate with Siemens logics using the MPI. One of these is Prodave, which is created by Siemens and is therefore not free; it is also only meant for a Windows environment, which might be a problem in some cases. A free library called Libnodave is also available. The library is not restricted to the Windows environment, which makes it a considerable choice when thinking about


how to create the communication between the SCADA and the logic, and it provides enough functionality to get and set the data in the logic using the MPI. At the highest level the communication is only between the SCADAs, so the method used to transfer the data is open, as almost any method can be used. One likely solution is to collect the data directly from the SCADA's database.


6 CONTROL SYSTEM

The control system is realized using the multi-agent approach, so each node, or agent, of the system is a partly self-contained system that is capable of working independently when it needs to solve local problems. Although each agent works individually, the agents can communicate with each other to solve global problems or to notify each other about problems.

6.1 Agents

Each agent consists of the same basic structure and functionality as shown in Figure 13. The only agent that differs from the others is the SGC, which is also capable of data mining. Every agent is capable of decision making and of forecasting the future consumption, but the agents might also have different kinds of actions available depending on their position in the structure of the system. The basic actions are introduced in the following sections.

Figure 13. A skeleton of an agent.


The skeleton of an agent closely resembles the agent architecture given by Logenthiran, Srinivasan and Khambadkone [21]. However, unlike in the work of Ueda and Nagata [22], the control system contains only two kinds of agents: the SGC and the LAGs. Therefore the structure of the control system, and of the whole grid, is quite simple, as can be seen in Figure 10. The structure recalls a lot the original idea shown in Figure 1, but instead of using the SCADAs the system uses only the individual agents, and the system is also fully connected using the Windows Communication Foundation (WCF) solutions, so the communication model is not as restricted as originally proposed.

6.2 Data mining

The data mining provides the control system the means to adapt to new or changing situations. With data mining it is possible to find the important relations in the input data and to find out how these relations affect the grid. In this case the input data mainly consists of information about the electricity consumption and the status information of the agents, but it can also consist of information about the weather temperature, the time of the year or other attributes that are specific to a certain day or time of the year.

When all the data is numeric, the method presented in Algorithm 1 can be used. Algorithm 1 takes an N x N matrix as an input, where the rows represent the situations and the columns the attributes of the situations. The matrix is then normalized and the attributes that have no effect on the result are removed.

Algorithm 1: Finding relations.

Input: Matrix of known situations and their attributes
1. Normalize the input
2. Calculate the matrix A
3. Find the eigenvalues and the eigenvectors
4. Get rid of the unimportant attributes
Output: Matrix of known situations and their attributes, without the unimportant attributes.


In Algorithm 1 each column $P_i$ is normalized to $Z_i$ according to

$$Z_i = \frac{P_i - m(P_i)}{d(P_i)}$$

where the column mean $m(P_i)$ is calculated as

$$m(P_i) = \frac{1}{n} \sum_{k=1}^{n} P_{ki}$$

and the standard deviation $d(P_i)$ is defined by

$$d(P_i) = \sqrt{m(P_i^2) - [m(P_i)]^2}$$

The symmetric correlation matrix $A$ is calculated as

$$A = \tau(Z_i, Z_j) \tag{1}$$

where $\tau(Z_i, Z_j)$ is defined as

$$\tau(Z_i, Z_j) = \frac{m(Z_i Z_j) - m(Z_i)\,m(Z_j)}{d(Z_i)\,d(Z_j)}$$

The matrix $A$ calculated in Equation 1 is used to find the eigenvalues and the eigenvectors, from which the important attributes of the data are found using

$$\frac{\lambda_n + \dots + \lambda_{n-k}}{\sum_{i=1}^{n} \lambda_i} > 80\% \tag{2}$$

Equation 2 shows how the important attributes are found using a threshold of 80% and the eigenvalues $\lambda_i$. The final output matrix is then composed from the original input using the columns from $n-k$ to $n$, where $n$ is the number of the attributes and $k$ is obtained from Equation 2.
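As a rough illustration of Algorithm 1, the following sketch normalizes two attribute columns, computes their correlation, and applies the 80% threshold of Equation 2. For a 2 x 2 correlation matrix the eigenvalues are simply 1 ± τ, so no linear-algebra library is needed here; real data would use a numerical eigendecomposition. The data values are invented:

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def std(xs):
    # d(P_i) = sqrt(m(P_i^2) - [m(P_i)]^2)
    m = mean(xs)
    return math.sqrt(mean([x * x for x in xs]) - m * m)

def normalize(col):
    # Z_i = (P_i - m(P_i)) / d(P_i)
    m, d = mean(col), std(col)
    return [(x - m) / d for x in col]

def correlation(zi, zj):
    # for normalized columns m(Z) = 0 and d(Z) = 1, so tau reduces to m(Zi * Zj)
    return mean([a * b for a, b in zip(zi, zj)])

# two attributes over five observed situations (invented numbers)
p1 = [1.0, 2.0, 3.0, 4.0, 5.0]
p2 = [2.1, 3.9, 6.2, 8.0, 9.8]          # nearly a linear function of p1

z1, z2 = normalize(p1), normalize(p2)
r = correlation(z1, z2)                  # off-diagonal entry of A
# eigenvalues of the 2x2 correlation matrix [[1, r], [r, 1]] are 1 + r, 1 - r
lam = sorted([1 + r, 1 - r], reverse=True)
share = lam[0] / sum(lam)                # fraction captured by the top eigenvalue
keep_one = share > 0.8                   # the 80 % threshold of Equation 2
```

Here `keep_one` being true means one of the two attributes carries almost all of the information, so the other can be dropped from the output matrix.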

Data mining is not only about finding the connections between the attributes using statistical methods, but also about grouping, or clustering, the input according to similarity. The system has to find the pattern, or the features, of each input so that every input has the same set of features, but the values of the features are similar inside a group or cluster and different between the clusters.

6.3 Forecasting

The possibility to do forecasting is an important part of the basic functionality of each agent in the system; together with the methods of decision making, forecasting is one of the main actions. One way to do the forecasting is to combine the data mining technique seen in Algorithm 1 with the regression model in Algorithm 2.

Algorithm 2: Regression model.

Input: Matrix of the observations, from Algorithm 1
1. Estimate the parameters
2. Create the regression model
Output: Regression model

The regression model can be estimated using methods like the LSE or the ordinary least squares (OLS). Both methods try to minimize the sum of squared distances between the observations,

$$S = \sum_{i=1}^{n} r_i^2$$

and the linear model, in order to find out the unknown variables of the regression model. Different approaches, like the generalized linear model (GLM), can also be used. The model of estimation


$$y_i = \theta_0 + \theta_1 x_i^{(1)} + \dots + \theta_p x_i^{(p)} + \epsilon_i \tag{3}$$

is the same for the OLS and the GLM. In Equation 3, $y_i$ describes the known output of the observation $i$ and $x_i^{(p)}$ the variables of the observation. The $\epsilon_i$ is the disturbance term of the observation $i$, bringing a level of uncertainty to the calculation. The $\theta_i$ stands for an unknown variable that needs to be estimated to get the final regression model, which can be used to forecast the consumption.
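A minimal sketch of the OLS estimation for Equation 3 with a single predictor (p = 1) uses the closed-form solution θ₁ = cov(x, y) / var(x) and θ₀ = m(y) − θ₁ m(x); for several predictors the normal equations would be solved instead. The consumption figures are invented for illustration:

```python
# Closed-form OLS fit for y = theta0 + theta1 * x + epsilon.

def ols_fit(x, y):
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / n
    var = sum((xi - mx) ** 2 for xi in x) / n
    theta1 = cov / var            # slope: minimizes the sum of squared residuals
    theta0 = my - theta1 * mx     # intercept
    return theta0, theta1

# hourly consumption (kWh) against outdoor temperature -- illustrative numbers,
# chosen to lie exactly on the line 25 - 0.5 * t
temps = [-10.0, -5.0, 0.0, 5.0, 10.0]
loads = [30.0, 27.5, 25.0, 22.5, 20.0]

theta0, theta1 = ols_fit(temps, loads)
forecast = theta0 + theta1 * 2.0   # predicted load at +2 degrees
```

Once θ₀ and θ₁ are estimated, forecasting is just an evaluation of the fitted line at the expected attribute values.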

The methods can also handle binary data, so even non-numeric data can be used. The non-numeric data just has to be transformed into a form where it can be described as binary. If the data consists only of strings and cannot be converted into numerical or binary information, then the data cannot be used for creating the regression model.

6.4 Decision making

The system contains a centralized rule-based expert system, which is able to help the agents of any level with their decision making. Only the highest-level agent that controls the whole system has a direct connection to the expert system, so all questions have to be directed through this agent. Although this is not a very good approach in a distributed system, it is definitely the easiest one, and the main agent is the only one capable of data mining and therefore of updating the rule base.

The expert system is created using the Prolog programming language, and it is independent from the main system. Prolog was selected as the programming language because the expert system uses only prewritten rules from the rule base, so in this case the decision making is only a logical conclusion, for which Prolog is well suited. Of course, the expert system could have been realized in C#, like the control system, but the simplicity of the expert system does not give any reason to use complex languages like C#.


The expert system works by getting the problem parameters from the main agent; then, communicating with the rule base, the system decides the best action for the current situation, as shown in Figure 14. The result is sent to the main agent, which sends it to the agents that originally requested the answer.

Figure 14. Expert system.

Although the expert system is independent and does not require the control system to work, it benefits from the control system. The main agent is the only agent in the control system that is capable of data mining, and therefore of finding new information or connections in the collected data. It is also capable of adding or upgrading the rules in the rule base used by the expert system. This connection makes the expert system more adaptive, as it can evolve over time and therefore requires less human attention.

The rule base is only a set of if-else rules, which keeps the system quite simple. This kind of simple approach does not make the system as effective as some kind of fuzzy approach, but in this case it is enough, and if needed the expert system can be upgraded. An example of the rules is shown in Table 1.

Table 1. Rules of the expert system.

IF agent1_status is 0 THEN agent1_offline
IF agent1_status is 1 AND agent1_consumption is 0 THEN gridProblem
IF forecast >= production THEN blackout_warning
ELSE network_status is 1
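The rules of Table 1 can be read as ordinary if-else logic. The following sketch is one possible encoding; the attribute names follow the table, while the return labels and the numeric values in the examples are illustrative:

```python
# Table 1 rules as a first-match-wins chain of if-else tests.

def decide(agent1_status, agent1_consumption, forecast, production):
    if agent1_status == 0:
        return "agent1_offline"
    if agent1_status == 1 and agent1_consumption == 0:
        return "gridProblem"      # agent reports in but measures no consumption
    if forecast >= production:
        return "blackout_warning" # forecast demand exceeds available production
    return "network_status_1"     # the ELSE branch: everything is nominal

# each rule of the table can be exercised in turn
offline = decide(0, 5, 10, 20)
problem = decide(1, 0, 10, 20)
warning = decide(1, 5, 30, 20)
nominal = decide(1, 5, 10, 20)
```

Note that the rule order matters: an offline agent is reported as offline even if the forecast also exceeds production, exactly as a sequential rule base would conclude.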


7 DISCUSSION AND FUTURE WORK

In this work a structure for a control system using a multi-agent approach has been presented. The functionality of the agents has been described, but the final methods that the agents should use have not been presented, as the information required for creating those methods is missing. Therefore, in this chapter the future development of the system is considered, and for some functionalities of the system possible solutions are also discussed.

7.1 Communication

As mentioned earlier in this work, the communication has an important role in the control system, and its importance increases further when using multiple agents, which have to be able to communicate with each other to do their jobs. The communication method between the agents has been narrowed down to the WCF, because the whole system is built on Microsoft technologies; more precisely, the system is mainly coded using the Microsoft C# programming language.

The main question when implementing the communication is the location of the server that handles the data transfer between agents. Each agent has to have an address at which it can be located in any case, whether the agent is located behind the public Internet or inside a Local Area Network (LAN); the location depends on the size and use of the control system. Therefore, there are two possible approaches for locating the server.

The simplest solution is to put the WCF server at the top of the hierarchy with the main agent that controls the whole system. In this case all communication between the agents would go through that point. With small solutions this would not cause any major problems, unless there are connection problems with the highest layer, but when the system grows, the amount of data transfer might congest that point and cause delays in communications. Also, as the size of the system grows, the distances between


the individual agents and the main agent with the WCF server increase. Again, this might cause some delay in the communications. Although the latency would not increase to intolerable numbers, it would still cause problems, because it is important that the agents can work as close to real time as possible. The scheme of this kind of communication structure can be seen in Figure 15.

Figure 15. Communication with a centralized WCF server.

The second possible approach is to implement a simple server in each agent, so all communication can be done directly between the participating agents, as shown in Figure 16. Unfortunately this would make the structure of each agent a bit more complex, which is something that should be avoided, as it is desirable to keep the agents as simple as possible. Still, this kind of distributed approach would make the communication more direct, and redundant data would not congest the rest of the system.

Figure 16. Communication with a distributed WCF server.


Of course, something between these two approaches could be implemented, but when considering the possibility that the system grows over time, it might be difficult to decide where the nodes of the communication model should be. Therefore, these two approaches are the most practical ones.

7.2 Data mining

One important part of each agent's functionality is the ability to make decisions, in order to find solutions for the current local or global problem. Data mining is one part of this process, as it is necessary for updating the rules of decision making or adding new rules. The rules can be added or modified by hand, of course, but when considering an automated system it is important that the system itself can find, from the data gathered earlier, new connections which can be used to update or create the rules.

The current system can find the important relations between the attributes and cluster the data as a basis for classification, but the system cannot update the rules of the expert system or add new ones. The difficulty of manipulating the rules in the rule base is that the rules have to be exact. The data mining process gives guidelines on how the variables are connected to each other, or what kind of results can be expected with certain input attributes, but the process does not provide a specific rule for each situation. It does not give the possibility to see directly that "if A is B the result is C".

The same problem is faced when updating or adding the rules. With the update process the situation is even a bit more complicated, as the system does not know the old features, only the rules. So, it is not only about creating a rule from the found information, but about manipulating the existing rule according to the new information when the old information is possibly not known. A solution that can handle this transformation from information to rules has to be found.

7.3 Decision making

The current decision making mechanism is a simple rule-based expert system that uses if-else rules to define what kind of decision to make. Though the system is functional, it lacks the possibility to make any sophisticated decisions. This kind of higher-level decision making would require a more flexible way of thinking, methods which would result in more human-like actions.

This kind of decision making could be achieved using fuzzy rules instead of the stricter if-else rules. This would make the expert system more complex, and updating the rule base would be more difficult, but at the same time the system could adjust to more varied situations, where there is not only one solution but the correct answer might be somewhere between two possible solutions.

7.4 Reliability

The functionality of the whole system depends on the communication between the agents, so if the messages cannot get to their destinations, the system cannot function properly. Therefore, it is important to make sure that the system can handle different kinds of problems on its own, or at least inform the proper party about the problems it is facing.

The most likely problem is that one of the agents does not work for some reason. In this case all the communication and data flow that would normally be directed to that agent are transferred to some other agent on the same level. The correct level of the destination agent is important, as the actions of the agents can differ between the levels.

The communication between different agents is not a problem, as the agents are fully connected to each other; the problem is how to find out when an agent does not work and how to inform the destination agent that the incoming information would usually go to someone else.


The original destination is important to know because of the forecasting: if the destination is not mentioned and the data from one city region is transferred to another region because the local agent does not work, the forecast of the energy consumption will give very strange results and the grid will become unbalanced.

Some kind of a failsafe should be designed to prevent any situations which would risk the continuous data flow. Basically this functionality should be part of each agent, but its implementation is left for future work.
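One possible shape for such a failsafe is a heartbeat registry: each agent periodically reports in, and traffic destined for a silent agent is rerouted to a living agent on the same level, with the original destination kept attached so that regional forecasts stay correct. All names and the timeout value here are assumptions for the sketch, not part of the designed system:

```python
TIMEOUT = 5.0   # seconds without a heartbeat before an agent is presumed dead

class Registry:
    def __init__(self):
        self.last_seen = {}     # agent id -> timestamp of last heartbeat
        self.level = {}         # agent id -> level in the hierarchy

    def heartbeat(self, agent, now):
        self.last_seen[agent] = now

    def is_alive(self, agent, now):
        return now - self.last_seen.get(agent, float("-inf")) < TIMEOUT

    def route(self, dest, now):
        """Return (handler, original destination) for a message to dest."""
        if self.is_alive(dest, now):
            return dest, dest
        # fall back to any living agent on the same level
        for agent, lvl in self.level.items():
            if agent != dest and lvl == self.level[dest] and self.is_alive(agent, now):
                return agent, dest   # keep the original destination attached
        raise RuntimeError("no living agent on level %d" % self.level[dest])

reg = Registry()
reg.level.update({"a1": 1, "a2": 1})
reg.heartbeat("a1", 0.0)
reg.heartbeat("a2", 8.0)
# a1 has gone silent by t = 10, so its traffic is handed to a2 on the same level
handler, original = reg.route("a1", 10.0)
```

Carrying `original` alongside `handler` is what lets the receiving agent keep the rerouted consumption data tied to its true region instead of mixing it into its own.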

7.5 Visualization

The visualization of the whole system probably does not seem important, considering that each part of the system is intelligent enough to handle its own problems or to tell someone about them so they can be solved. Still, the importance of human interaction cannot be underestimated: although the agents are intelligent, their intelligence depends on the decision making system, which cannot learn something from the input data unless it has happened often enough before to be noticed by the system, or has been imported by hand. This is why the visualization is needed.

A computer can make its decisions using only raw numeric data as a source, but for humans this is more or less impossible in an acceptable time frame. When this information is visualized, however, humans can find different information in the data than the computer. Of course it depends on the person how he or she interprets the given picture, but an experienced person can see possible problems in the picture before the computer has gathered enough data to make its decision. These human-made decisions, and the situations that caused them, can be stored in a database, so that the next time a similar situation appears the system can handle it on its own.

The visualization can include the current energy consumption and the forecast of the future consumption for the whole system and for the individual customers. This way the status of the whole energy grid can be seen at once. In some cases the communication between


the agents can also be visualized, for example if one of the agents has dropped out of the network or one of the agents needs help with a current problem. In these situations human interaction can be included if the system for some reason does not react to the problem or makes a wrong decision.

7.6 Distributed energy production

In the smart grid the used electricity does not come only from classical nuclear or coal plants, or from more modern centralized wind or solar power farms; the electricity comes from every house, factory, plant or farm that is connected to the energy grid. This kind of totally distributed energy production shows what can be achieved using smart grids, although the system itself is difficult to produce. Still, there are some research reports that take into consideration the possibility to sell the extra produced power to the global energy grid [20, 23], and there are also Finnish researchers trying to find a functional solution to this problem [2, 3, 4].

In this work the approach of two-directional energy flow was not considered beyond the level of a possible idea, mainly because energy production at home is not that usual, at least at the moment, and because it would have required creating the agents for more complex tasks than they were designed for. Still, when considering the next version of this system, distributed energy production is one important part that should be taken into consideration, as it takes the whole energy market to a whole new level where anyone can be a producer.

Of course, the problem itself is quite complex, as it is not enough just to measure the power consumption and production of a house, for example; the system should also be able to transfer the extra power from the production facilities to the global grid and to measure the amount of the transferred power so it can be billed properly. At the same time the system should switch off the incoming connection from the global grid and use only the locally produced power, monitoring the power levels all the time so that no blackouts would occur.
