This section diagrams the core execution of CloudSimDisk, including the communication between components, the event-passing activity and the execution of the main methods.

As a starting point, CloudSim.startSimulation() starts all the entities and automatically generates the first events of the whole simulation process. At 0.1 second, one of these events calls the broker method submitCloudlets(), which is responsible for sending the Cloudlets one by one to the data center. Therefore, for each Cloudlet, one event addressed to the data center is scheduled (see Figures 16 and 17; for the sake of simplicity, each Cloudlet contains only one file that needs to be added to the persistent storage, itself composed of a single HDD). These events carry the tag CLOUDLET_SUBMIT, have a scheduling time defined by the distribution chosen by the user, and contain the Cloudlet as "event data". Next, the data center calculates the transaction time for each file of the Cloudlet that needs to be added to, or retrieved from, the persistent storage. At the same time, it schedules a confirmation event addressed to itself, with the tag CLOUDLET_FILE_DONE, delayed by the calculated transaction time plus any waiting delay due to the request queue on the target disk.
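
As an illustration, the fragment below sketches how such CLOUDLET_SUBMIT events could be scheduled by the broker. It is a minimal sketch, not the actual CloudSimDisk implementation: ExampleBroker is a hypothetical class, and a constant inter-arrival time stands in for the user-chosen arrival distribution; only the tag and the schedule() call come from CloudSim.

    import java.util.List;

    import org.cloudbus.cloudsim.Cloudlet;
    import org.cloudbus.cloudsim.DatacenterBroker;
    import org.cloudbus.cloudsim.core.CloudSimTags;

    public class ExampleBroker extends DatacenterBroker {

        private final int datacenterId;          // id of the target data center entity
        private final double interArrivalTime;   // stands in for the user-chosen distribution

        public ExampleBroker(String name, int datacenterId, double interArrivalTime)
                throws Exception {
            super(name);
            this.datacenterId = datacenterId;
            this.interArrivalTime = interArrivalTime;
        }

        @Override
        protected void submitCloudlets() {
            List<Cloudlet> cloudlets = getCloudletList();
            double delay = 0.0;
            for (Cloudlet cloudlet : cloudlets) {
                delay += interArrivalTime;
                // One CLOUDLET_SUBMIT event per Cloudlet, addressed to the data
                // center and carrying the Cloudlet object as event data.
                schedule(datacenterId, delay, CloudSimTags.CLOUDLET_SUBMIT, cloudlet);
            }
        }
    }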

Figure 16. Event passing sequence diagram for "Basic Example 1", 3 Cloudlets.

Figure 16 presents a simple example where the transaction time of each Cloudlet is shorter than the interval between Cloudlet arrivals. Hence, there is no waiting delay.

Figure 17 presents an example based on a real-world workload (Wikipedia). In this case, the Cloudlet arrival rate is higher, so the time interval between two Cloudlets is smaller. As a result, Cloudlets have to wait in the disk queue before being executed.


Figure 17. Event passing sequence diagram for "Wikipedia Example", 3 Cloudlets.

As a reminder, the Transaction Time is the sum of the Rotation Latency, the Seek Time and the Transfer Time, obtained according to the target HDD characteristics (see 3.2.1). The waiting delay is the time the request spends in the disk queue, waiting to be executed. If no requests are in the queue, the waiting time is zero.
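
To make this relation concrete, the small helper below computes a transaction time from illustrative HDD characteristics. It is only a sketch: the method and parameter names are ours, and the numeric values in main() are examples, not taken from a specific CloudSimDisk HDD model.

    public final class TransactionTimeSketch {

        /**
         * Transaction time of one disk operation, as described above:
         * seek time + rotational latency + transfer time.
         */
        public static double transactionTime(double seekTimeSec, double rotationLatencySec,
                double fileSizeMb, double transferRateMbPerSec) {
            double transferTimeSec = fileSizeMb / transferRateMbPerSec;
            return seekTimeSec + rotationLatencySec + transferTimeSec;
        }

        public static void main(String[] args) {
            // Illustrative values only: 9 ms seek, 4.17 ms rotational latency,
            // a 1 MB file at 210 MB/s  ->  about 0.018 s.
            System.out.println(transactionTime(0.009, 0.00417, 1.0, 210.0));
        }
    }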

Figure 18 presents in detail what happens when the data center receives an event with the tag CLOUDLET_SUBMIT. It begins by retrieving the Cloudlet object from the data parameter of the event. Then, it retrieves the Cloudlet DataFiles, a list that contains zero, one or many files (see the first part of Figure 18). Each file is added to the persistent storage according to the chosen algorithm (see 3.2.4).

The transaction time of each operation is returned by the HDD and also stored in the attributes of the File, so that this information can be accessed later. After each operation, the method processOperationWithStorage(...) is called to handle the waiting delay of the request, the queue size of the HDD and the operation mode of the HDD. This method is the result of careful design, combining logical, mathematical and engineering considerations. It also generates the CLOUDLET_FILE_DONE events that will be used for the output results.

When all the data files have been handled, the same scenario is performed for the required files of the Cloudlet, except that the list of required files is a list of filenames that need to be retrieved, so getFile(Filename_n) returns the requested File (see the second part of Figure 18).
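
The fragment below sketches this handling inside the data-center entity, assuming the simplified flow of Figure 18. MyCloudlet, the two persistent-storage helpers and the processOperationWithStorage(...) signature are placeholders modelled on the description above, not the exact CloudSimDisk code.

    // Fragment of the data-center entity. SimEvent and File come from
    // org.cloudbus.cloudsim; the other names are illustrative placeholders.
    protected void processCloudletSubmit(SimEvent ev) {
        MyCloudlet cloudlet = (MyCloudlet) ev.getData();   // Cloudlet carried as event data

        // First part: data files to be added to the persistent storage.
        for (File file : cloudlet.getDataFiles()) {
            double transactionTime = addFileToPersistentStorage(file); // placement algorithm (see 3.2.4)
            processOperationWithStorage(file, cloudlet, transactionTime);
        }

        // Second part: required files to be retrieved from the persistent storage.
        for (String fileName : cloudlet.getRequiredFiles()) {
            File file = getFileFromPersistentStorage(fileName);
            if (file != null) {
                processOperationWithStorage(file, cloudlet, file.getTransactionTime());
            }
        }
    }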

Figure 18. Process when the data center receives a CLOUDLET_SUBMIT event.

On the HDD side, adding a file can be decomposed into three phases, chronologically organized (see Figure 19):

• Firstly, the transaction time is determined by retrieving successively the seek time, the rotation latency and the transfer time for the concerned file. All this information depends on the HDD model used.

• Secondly, the list of files, the list of filenames and the space used on the HDD are updated according to the file added. This action is necessary to keep track of the content of each HDD. It also facilitates the implementation of file striping.

• Thirdly, the file's attributes "transactionTime" and "ResourceID" are set, respectively, to the transaction time determined in phase I and the ID of the concerned HDD. Hence, this information can be reused later, for example, to analyze on which HDD files are added.

After these three phases, the addFile method returns the transaction time. Note that phase II does not exist when getting a file, since the files on the HDD are not changed: the list of files, the list of filenames and the space used on the HDD remain the same. Likewise, the "ResourceID" is not modified in phase III, since the file is still stored on the same device. Moreover, the file object needs to be retrieved from the list of files of the HDD before phase I. To do so, an iterator is instantiated to run through the list, and a while loop compares the filename of each element until the required file is found. If the name of the required file does not match any file stored on the persistent storage, a null File object is returned by the method getFile(fileName); otherwise, the matched file is returned.
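
This lookup could look like the sketch below. It assumes a field fileList holding the HDD's org.cloudbus.cloudsim.File objects (and a java.util.Iterator over it); the surrounding class and the exact CloudSimDisk signature are not shown.

    // Fragment of the HDD class: linear search for a file by name,
    // following the description above (field 'fileList' is assumed).
    public File getFile(String fileName) {
        File result = null;
        Iterator<File> it = fileList.iterator();
        while (it.hasNext() && result == null) {
            File candidate = it.next();
            if (candidate.getName().equals(fileName)) {
                result = candidate;     // required file found
            }
        }
        return result;                  // null when no stored file matches the name
    }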

CloudSimDisk users can define any type of request arrival distribution as a parameter of the simulation, so the data center has to implement a scalable algorithm that handles the persistent storage in any situation. This feature is provided by the method processOperationWithStorage().

Now, remember that the data-center entity has to handle the persistent storage throughout the simulation. This includes updating the storage state (idle or active) when needed and keeping a history of the time spent in each mode by each HDD. This task is not trivial, since it mixes different times, delays and durations. Moreover, new Cloudlets can arrive at any time to add or retrieve files.

Figure 19. HDD internal process of adding a file.

Figure 20a depicts an example showing how the state of the persistent storage is managed during a simulation. Additionally, Figure 20b shows the Java code responsible for this process, and Table 5 summarizes the key time values. The example considers three Cloudlets, arriving at three different times.

When a Cloudlet arrives to interact with the persistent storage, two cases can be identified:

• The target HDD is in Idle mode (the HDD is processing nothing). In that case:
– the waiting time is zero;
– the active end time is the current time plus the transaction time;
– the event delay is the transaction time;
– the storage is set to Active mode.

• The target HDD is in Active mode (the HDD is already processing one or more requests). In that case:
– the waiting time is the active end time minus the current time;
– the active end time is increased by the transaction time;
– the event delay is the waiting time plus the transaction time.

Note that, in both cases, the transaction time is retrieved from the file's attributes and the total active duration of the target HDD is incremented by this duration.

Further, the power needed by the HDD in active mode is retrieved, the energy consumed is computed (based on the transaction time), and the confirmation event is scheduled, carrying all the previously established variables.
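
The fragment below is a minimal sketch of this bookkeeping, following the two cases listed above. The names MyHdd and MyCloudSimTags, the accessor methods and the event payload layout are assumed for illustration; only the tag name CLOUDLET_FILE_DONE and the overall logic come from the description.

    // Fragment of the data-center entity (assumed names, not the exact CloudSimDisk code).
    protected void processOperationWithStorage(MyHdd hdd, File file, Object cloudlet) {
        double now = CloudSim.clock();
        double transactionTime = file.getTransactionTime();
        double waitingTime;

        if (hdd.isIdle()) {
            // Case 1: the disk is processing nothing.
            waitingTime = 0.0;
            hdd.setActiveEndAt(now + transactionTime);
            hdd.setActiveMode(true);                       // switch the disk to Active mode
        } else {
            // Case 2: the disk is already processing one or more requests.
            waitingTime = hdd.getActiveEndAt() - now;
            hdd.setActiveEndAt(hdd.getActiveEndAt() + transactionTime);
        }

        // In both cases, the disk is busy for 'transactionTime' additional seconds.
        hdd.addActiveDuration(transactionTime);

        // Energy consumed by this operation: active power times service time.
        double energy = hdd.getPowerActive() * transactionTime;

        // Confirmation event, delayed by the queueing time plus the service time.
        double eventDelay = waitingTime + transactionTime;
        schedule(getId(), eventDelay, MyCloudSimTags.CLOUDLET_FILE_DONE,
                new Object[] { cloudlet, file, waitingTime, transactionTime, energy });
    }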

Table 5. Trace of HDD values related to Figure 20.

Clock()      Transaction Time   Waiting Time   Active End at   Event Delay
At 0.311 s   0.014 s            0.000 s        0.325 s         0.014 s
At 0.321 s   0.008 s            0.004 s        0.333 s         0.012 s
At 0.326 s   0.002 s            0.007 s        0.335 s         0.009 s
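
These values can be checked against the two cases above. For instance, when the second request arrives at 0.321 s, the disk is still active until 0.325 s, so the waiting time is 0.325 − 0.321 = 0.004 s, the new active end time is 0.325 + 0.008 = 0.333 s, and the event delay is 0.004 + 0.008 = 0.012 s.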


(a) HDD management at the datacenter level.

(b) Code snippet showing the storage's state update when receiving a new Cloudlet.

Figure 20. Example of storage management with one HDD: (a) graphic; (b) code.