
A Memory-Efficient and Time-Consistent Filtering of Depth Map Sequences

Sergey Smirnov, Atanas Gotchev, Karen Egiazarian

Department of Signal Processing, Tampere University of Technology, Tampere, Finland

ABSTRACT

'View plus depth' is a 3D video representation where a single color video channel is augmented with per-pixel depth information in the form of a gray-scale video sequence. This representation is a good candidate for 3D video delivery applications, as it is display agnostic and allows for some parallax adjustments. However, the quality of the associated depth is an issue, as the depth channel is usually the result of an estimation procedure based on stereo correspondences, or comes from a noisy and low-resolution range sensor. Therefore, proper filtering of the depth channel is needed before it is used for compression and/or view rendering. The problem is even more pronounced in video, where temporal consistency of the depth sequence is required.

In this paper, we propose a filtering approach to refine the quality of noisy, blocky, and temporally-inconsistent depth maps. We utilize color constraints from the video channel and modify a previous super-resolution approach to tackle the time consistency for video. Our implementation is fast and highly memory efficient. We present filtering results demonstrating the superiority of the developed technique.

Keywords: depth map, bilateral filtering, time-consistency

1. INTRODUCTION

A depth map is a gray-scale image which encodes the depth of a given scene from a viewpoint. Usually, it is aligned with and accompanies the color view of the same scene, thus forming a 3D scene representation informally called 'view plus depth', where each pixel of the depth map gives the position of the corresponding color pixel from the view with respect to the camera [1]. Figure 1 illustrates the concept of the view plus depth 3D representation for the popular test sequence 'Ballet dancer'.

Figure 1. Illustration of Video + Depth representation


As 2D functions of spatial coordinates, depth maps are piecewise-smooth, exhibiting objects of constant or gradually changing depth, with sharp edges delineating pieces (objects) of different depth. View plus depth is an attractive 3D representation format for 3DTV and free viewpoint television applications, as it allows for synthesizing desired views through so-called depth image based rendering (DIBR) [2]. Since the depth is given explicitly, it can be rescaled and maintained to address the parallax issues of 3D displays of different sizes [3]. Problems with disocclusions in the synthesized views are tackled by occlusion filling [4] or by extending the format to 'multi-view multi-depth' [3].

Depth maps are usually obtained through 'depth-from-stereo' or 'depth-from-multiview' type algorithms based on finding disparities between camera images and converting these to corresponding depths [5]. Special depth sensors based on time-of-flight (ToF) principles [6], laser scanners [7], or structured light [8] have also been used to form depth maps. Each of these image formation approaches brings its own advantages and drawbacks, and the quality of the delivered depth varies from approach to approach. Depth from stereo might suffer from inaccuracies in finding the disparity correspondences, especially when real-time performance is targeted. ToF sensors deliver precise depth measurements, but the images are usually rather noisy and of lower resolution compared to the corresponding video frame. In addition, depth maps usually undergo compression and might suffer from blocky artifacts when compressed by contemporary methods such as H.264 [9]. When accompanying video sequences, the consistency of successive depth maps in the sequence becomes an issue: time-inconsistent depth sequences might cause flickering in the synthesized views as well as other 3D-specific artifacts [10].

Filtering of depth maps has been addressed mainly from the point of view of increasing the resolution [11,12,13,14]. In [12], joint bilateral filtering has been suggested to upsample low-resolution depth maps. The approach has been further refined in [13] by suggesting proper anti-aliasing and complexity-efficient filters. In [11], a probabilistic framework has been suggested: for each pixel of the targeted high-resolution grid, several depth hypotheses are built and the hypothesis with the lowest cost is selected as the refined depth value. The procedure is run iteratively and bilateral filtering is employed at each iteration to refine the cost function used for comparing the depth hypotheses. In our previous work, we have compared several methods for restoring heavily compressed depth maps [14]. We have optimized state-of-the-art filtering approaches, such as local polynomial approximation [15] and bilateral filtering [16], to utilize edge-preserving structural information from the color channel for refining the blocky depth maps. In our comparison tests, the method based on [11] showed superior results at the price of high computational cost.

The time-consistency issue has been addressed mainly at the stage of depth estimation [17,18], either by adding a smoothing constraint along the temporal dimension in the global optimization procedure of the depth estimation, or by simple median filtering along successive depth frames.

In this contribution, we address the problem of filtering depth map sequences which are impaired by inaccurate depth estimation, noise, or compression artifacts. We modify the approach from [11] to make it faster and more memory-efficient, and we extend it toward video to tackle the time-consistency issue.

2. DEPTH MAP FILTERING APPROACH

2.1 Problem formulation

We consider a color video sequence C(x,t) in YUV color space, accompanied by the associated per-pixel depth z(x,t), where x ∈ Ω is a spatial variable, Ω being the image domain, and t is the frame index. A new, virtual view can be synthesized out of the given (reference) color frame and depth at time t, employing DIBR [2]. The synthesized view V is composed of two parts, V = V_v ∪ V_o, where V_v denotes the visible pixels from the position of the virtual view camera and V_o denotes the pixels of occluded areas. The corresponding domains are denoted by Ω_v and Ω_o correspondingly, Ω = Ω_v ∪ Ω_o.
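For illustration, the view synthesis step can be sketched as a minimal 1D-parallax forward warp. This is a toy sketch under our own assumptions (the function name, the linear depth-to-disparity conversion, and the "larger value = nearer" depth convention are all ours; real DIBR uses camera calibration and sub-pixel warping):

```python
import numpy as np

def dibr_warp(color, depth, baseline_times_focal=10.0):
    """Minimal 1D-parallax DIBR sketch: each pixel is shifted horizontally by a
    disparity derived from its depth; a z-buffer keeps the nearest pixel, and
    positions left unfilled form the occlusion mask (the domain of V_o)."""
    h, w = depth.shape
    virtual = np.zeros_like(color, dtype=float)
    zbuf = np.full((h, w), -np.inf)  # assumed convention: larger depth value = nearer
    disparity = np.round(baseline_times_focal * depth / 255.0).astype(int)
    for y in range(h):
        for x in range(w):
            xv = x + disparity[y, x]
            if 0 <= xv < w and depth[y, x] > zbuf[y, xv]:
                virtual[y, xv] = color[y, x]
                zbuf[y, xv] = depth[y, x]
    occluded = ~np.isfinite(zbuf)  # holes, to be filled by occlusion filling [4]
    return virtual, occluded
```

A flat depth map produces a pure copy with no holes, while a uniform near-depth plane shifts the whole frame and exposes a disoccluded band.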

We consider the case where the depth sequence has been degraded by some impairment added to the true depth:

z(x,t) = y(x,t) + n(x,t),

where y(x,t) is the true depth and n(x,t) is the impairment. In some cases it is useful to model the noise component as an independent white Gaussian process, n(x,t) ~ N(0, σ²). This simple modeling has proven quite effective for mitigating e.g. blocking artifacts [19].
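The additive white-Gaussian degradation model above can be sketched directly (the function name, the seed, and the 8-bit clipping range are our assumptions, used only to produce test data):

```python
import numpy as np

rng = np.random.default_rng(0)

def degrade_depth(true_depth, sigma=5.0):
    """Degradation model z = y + n, with n an i.i.d. white Gaussian process
    of standard deviation sigma; output clipped to the 8-bit depth range."""
    noise = rng.normal(0.0, sigma, size=true_depth.shape)
    return np.clip(true_depth + noise, 0, 255)
```

Such synthetically degraded depth maps are a convenient testbed, since the ground-truth y is then known exactly.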

Finally, we denote by the virtual view synthesized out of the degraded depth and by the virtual view synthesized out of processed depth and the given reference view. The goal of the depth filtering is to get a depth estimate closer to the ground true depth sequence and providing synthesized virtual view with improved quality.


2.2 Hypothesis filtering of individual frames

In the original approach [11], a 3D cost volume is constructed frame-wise out of several depth hypotheses, and the hypothesis with the lowest cost is selected as the refined depth value at the current iteration. More specifically, the cost volume at the i-th iteration is formed as the truncated quadratic difference

C_i(x, d) = min( η·L, (d − ẑ_i(x))² ),    (1)

where d is the potential depth candidate, ẑ_i(x) is the current depth estimate at coordinate x, and L is the search range controlled by a constant η. The time index t is omitted for simplicity. The obtained slices of the cost volume for different values of d largely retain the degraded pattern of z, as illustrated in Figure 2, left. Therefore, each slice of the cost volume undergoes joint bilateral filtering, i.e. each pixel of the cost slice is obtained as a weighted average of neighboring pixels, where the weights are also modified by the color similarity, measured as the l1 distance between the corresponding pixel of the color video frame and the neighboring ones:

C̃_i(x, d) = (1 / W(x)) Σ_{y ∈ N(x)} w(x, y) C_i(y, d),    (2)

where w(x, y) = exp( −‖x − y‖² / (2σ_s²) ) · exp( −‖C(x) − C(y)‖₁ / (2σ_c²) ), W(x) = Σ_{y ∈ N(x)} w(x, y), and N(x) is the neighborhood of coordinate x. The reason for applying bilateral filtering is two-fold: it assumes that the depth reflects the piecewise smoothness of the surfaces of the given 3D scene, and that the depth is correlated with the local scene color (same local color corresponds to constant depth). In our previous work [14] it was experimentally confirmed that filtering the cost volume (1) is more effective than directly filtering the noisy depth.

After bilateral filtering, the slices get smoothed (Figure 2, right) and the depth for the next iteration is obtained as

ẑ_{i+1}(x) = arg min_d C̃_i(x, d).    (3)
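One frame-wise iteration of Eqs. (1)-(3) can be sketched in NumPy as follows. This is our own illustrative implementation, not the code of [11]: it assumes a single-channel (grayscale) color frame, a Gaussian spatial kernel with an exponential l1 color term, and hypothetical parameter defaults:

```python
import numpy as np

def hypothesis_filter_iteration(depth, color, hypotheses, L=100.0,
                                sigma_s=3.0, sigma_c=10.0, radius=3):
    """One iteration: build a truncated-quadratic cost slice per depth
    hypothesis (Eq. 1), smooth it with joint bilateral weights (Eq. 2),
    and keep the per-pixel argmin over hypotheses (Eq. 3)."""
    h, w = depth.shape
    col = color.astype(float)
    best_cost = np.full((h, w), np.inf)
    refined = depth.astype(float).copy()
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    pad = radius
    dp = np.pad(depth.astype(float), pad, mode='edge')
    cp = np.pad(col, pad, mode='edge')
    for d in hypotheses:
        cost_slice = np.minimum((dp - d) ** 2, L)          # Eq. (1), truncated
        num = np.zeros((h, w))
        den = np.zeros((h, w))
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                c_shift = cost_slice[pad + dy:pad + dy + h, pad + dx:pad + dx + w]
                col_shift = cp[pad + dy:pad + dy + h, pad + dx:pad + dx + w]
                wgt = spatial[dy + radius, dx + radius] * np.exp(
                    -np.abs(col - col_shift) / sigma_c)    # Eq. (2) weights
                num += wgt * c_shift
                den += wgt
        filtered = num / den                               # filtered cost slice
        better = filtered < best_cost                      # Eq. (3): argmin over d
        best_cost[better] = filtered[better]
        refined[better] = d
    return refined
```

Note that the loop materializes only one cost slice at a time; the full cost volume is never stored, anticipating the memory-efficient variant of Section 2.3.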

Figure 2 Result of filtering of cost volume. Left: unfiltered cost function; right: bilaterally-filtered cost function.

The hypothesis filtering approach is illustrated in Figure 3. The approach methodologically assumes three steps: (1) form a cost volume, (2) filter the cost volume, (3) pick the minimum-cost hypothesis. In the original approach [11] a further refinement of the depth is suggested: instead of selecting the depth giving the minimum cost, as in Eq. (3), a quadratic function is fitted around that minimum and the minimizer of that function is selected instead.


Figure 3 Block diagram of hypothesis filtering

2.3 Practical implementation of the frame-wise filtering

In this subsection we suggest several modifications to the original approach to make it more memory-efficient and to improve its speed. It is straightforward to see that there is no need to form the full cost volume in order to obtain the depth estimate for a given coordinate x at the i-th iteration. Instead, the cost function is formed for the required neighborhood only and then the filtering is applied, i.e.

ẑ_{i+1}(x) = arg min_d (1 / W(x)) Σ_{y ∈ N(x)} w(x, y) min( η·L, (d − ẑ_i(y))² ).    (4)

Furthermore, the computational cost is reduced by assuming that not all depth hypotheses are applicable for the current pixel. A safe assumption is that only depths within the range d ∈ [min_{y∈N(x)} ẑ_i(y), max_{y∈N(x)} ẑ_i(y)] have to be checked.

Figure 4 Histogram of non-compressed and compressed depth map

Additionally, the depth range is rescaled with the purpose of further reducing the number of hypotheses. This step is especially efficient for certain types of distortions such as compression (blocky) artifacts. For compressed depth maps, the depth range appears to be sparse due to the quantization effect. Figure 4 illustrates histograms of depth values before and after compression, confirming the use of a rescaled search range of depth hypotheses. This modification speeds up the procedure and relies on the subsequent quadratic interpolation to find the true minimum. A pseudo-code of the suggested procedure in Eq. (4) is given in Table 1.
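The rescaling step can be sketched as a mapping from the sparse set of observed depth values to compact indices, with a lookup table for the inverse mapping after filtering (the function name is ours; this is one simple way to realize the rescaling described above):

```python
import numpy as np

def rescale_depth_range(depth):
    """Map the sparse set of observed depth values to compact indices
    0..K-1, shrinking the hypothesis search range for quantized depth maps.
    Returns the index image and the value table for the inverse mapping."""
    values, indices = np.unique(depth, return_inverse=True)
    return indices.reshape(depth.shape), values

# After filtering in index space, values[filtered_indices] restores the range.
```

For an H.264-compressed depth map with, say, a dozen surviving quantization levels, this reduces the hypothesis loop from 256 candidates to a dozen.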

Table 1. Pseudo-code of modified hypothesis filtering

Rescale the range of the noisy depth image
For every pixel (x,y) in the noisy depth image
    D = read window of depth frame around (x,y)
    C = read window of color frame around (x,y)
    W = calculate bilateral weights from C
    Cost_min = +Inf
    For d = min(D) to max(D)
        cost = sum(W .* min((D - d)^2, threshold)) / sum(W)
        If cost < Cost_min
            Depth_new(x,y) = d
            Cost_min = cost
        End
    End
End
Rescale the range of the filtered depth back
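A runnable per-pixel version of Table 1 / Eq. (4) may look as follows. It is a sketch under our own assumptions (grayscale color frame, Gaussian spatial and exponential l1 color weights, hypothetical defaults); only the local window is ever read, and only hypotheses within [min(D), max(D)] are tested:

```python
import numpy as np

def filter_pixel(depth, color, x, y, L=100.0, sigma_s=3.0,
                 sigma_c=10.0, radius=3):
    """Refine one depth pixel without building a cost volume (Eq. 4)."""
    h, w = depth.shape
    y0, y1 = max(0, y - radius), min(h, y + radius + 1)
    x0, x1 = max(0, x - radius), min(w, x + radius + 1)
    D = depth[y0:y1, x0:x1].astype(float)          # local depth window
    C = color[y0:y1, x0:x1].astype(float)          # local color window
    yy, xx = np.mgrid[y0:y1, x0:x1]
    W = (np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
         * np.exp(-np.abs(C - float(color[y, x])) / sigma_c))
    best_d, best_cost = float(depth[y, x]), np.inf
    for d in np.unique(D):                         # restricted hypothesis range
        cost = np.sum(W * np.minimum((D - d) ** 2, L)) / np.sum(W)
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```

The memory footprint is a single (2·radius+1)² window per pixel, which is what makes the approach cache-friendly compared to the full cost-volume formulation.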

Figure 5 Execution time of different implementations of filtering approach

Figure 5 illustrates the achieved speed-up. The figure shows experiments with depth filtering of a scene using different implementations of the filtering procedure. All implementations have been written in C and compiled into MEX files to be run from the Matlab environment. The vertical axis shows the execution time in seconds and the horizontal axis shows the number of slices employed (and thus the dynamic range assumed). In the figure, the dotted curve shows single-pass bilateral filtering applied directly on the depth; it does not depend on the dynamic range but on the window size, thus it appears as a constant. The red curve shows the computation time for the original approach implemented as a three-step procedure over the full dynamic range; naturally, it is a linear function of the number of slices to be filtered. Our implementation (blue curve, "no cost volume"), applying the reduced dynamic range, also depends linearly on the number of slices but with dramatically reduced steepness.


2.4 Extending the filtering approach to video

Eq. (4) is extended to video sequences as follows:

ẑ_{i+1}(x, t) = arg min_d (1 / W(x, t)) Σ_{(y, s) ∈ N(x, t)} w(x, t, y, s) min( η·L, (d − ẑ_i(y, s))² ),    (5)

where w(x, t, y, s) = exp( −‖x − y‖² / (2σ_s²) ) · exp( −|t − s|² / (2σ_t²) ) · exp( −‖C(x, t) − C(y, s)‖₁ / (2σ_c²) ) and W(x, t) is the sum of the weights over the spatio-temporal neighborhood N(x, t). This essentially means that the depth hypotheses are checked within a parallelepiped around the current depth voxel with coordinates (x, t).

While the neighboring voxels are weighted by their color similarities to the central one, the temporal distance is penalized separately from the spatial one to enable better flexibility in tuning the filter parameters. Note that the video filtering uses no explicit motion information: no motion estimation/compensation is applied. We rely on the color (dis)similarity weights to sufficiently suppress depth voxels changed considerably by motion. The hypothesis filtering procedure for video is illustrated in Figure 6.
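The video extension of Eq. (5) changes the per-pixel sketch only by adding a temporal axis and the separate temporal penalty. Again, this is our own illustrative implementation with hypothetical parameters, not the authors' code:

```python
import numpy as np

def video_filter_pixel(depth_seq, color_seq, t, x, y, L=100.0,
                       sigma_s=3.0, sigma_c=10.0, sigma_t=1.0,
                       radius=3, t_radius=1):
    """Eq. (5) sketch: hypotheses are tested over a spatio-temporal
    parallelepiped; temporal distance is penalized separately from spatial
    distance, and color dissimilarity stands in for motion compensation."""
    T, h, w = depth_seq.shape
    t0, t1 = max(0, t - t_radius), min(T, t + t_radius + 1)
    y0, y1 = max(0, y - radius), min(h, y + radius + 1)
    x0, x1 = max(0, x - radius), min(w, x + radius + 1)
    D = depth_seq[t0:t1, y0:y1, x0:x1].astype(float)
    C = color_seq[t0:t1, y0:y1, x0:x1].astype(float)
    tt, yy, xx = np.mgrid[t0:t1, y0:y1, x0:x1]
    W = (np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
         * np.exp(-(tt - t) ** 2 / (2 * sigma_t ** 2))   # separate temporal penalty
         * np.exp(-np.abs(C - float(color_seq[t, y, x])) / sigma_c))
    best_d, best_cost = float(depth_seq[t, y, x]), np.inf
    for d in np.unique(D):
        cost = np.sum(W * np.minimum((D - d) ** 2, L)) / np.sum(W)
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```

A single-frame flicker (a voxel whose depth disagrees with the same position in adjacent frames of similar color) is voted down by the temporal neighbors, which is exactly the time-consistency mechanism described above.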

Figure 6. Extension of hypothesis filtering to video

3. EXPERIMENTAL RESULTS

3.1 Quality measures

The following quality measures applied frame-wise have been used to quantify the performance of the proposed technique.

- PSNR of Restored Depth compares the impaired or filtered depth against the ground-truth depth (when available).

- PSNR of Rendered View does the same over the rendered view.

- Percentage of bad pixels (BAD) is a measure originally used to compare estimated depths from stereo [20]. For images with N pixels, it counts the percentage of pixels differing by more than a pre-specified threshold δ:

BAD = (1/N) Σ_{x ∈ Ω} [ |ẑ(x) − y(x)| > δ ] · 100%.    (6)

The measure can be applied to all pixels in the image or to pixels near discontinuities only, to emphasize the change of quality in those areas.
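The BAD measure of Eq. (6), including the optional restriction to discontinuity areas, is direct to compute (function and parameter names are ours):

```python
import numpy as np

def bad_pixels(filtered, ground_truth, threshold=1.0, mask=None):
    """Eq. (6): percentage of pixels whose absolute depth error exceeds a
    threshold; an optional boolean mask restricts the count, e.g. to pixels
    near depth discontinuities."""
    diff = np.abs(filtered.astype(float) - ground_truth.astype(float))
    if mask is not None:
        diff = diff[mask]
    return 100.0 * np.count_nonzero(diff > threshold) / diff.size
```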


- Depth Consistency measures the percentage of pixels for which the magnitude of the gradient of the difference between true and processed depth exceeds a pre-specified threshold δ_g:

CONSIST = (1/N) Σ_{x ∈ Ω} [ ‖∇(ẑ(x) − y(x))‖ > δ_g ] · 100%.    (7)

The measure gives preference to non-smooth areas in the processed depth, considered the main source of geometrical distortions.
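A sketch of the consistency measure of Eq. (7), using NumPy's finite-difference gradient (the gradient estimator and names are our choice):

```python
import numpy as np

def depth_consistency(filtered, ground_truth, grad_threshold=1.0):
    """Eq. (7): percentage of pixels where the gradient magnitude of the
    depth error exceeds a threshold, flagging non-smooth error regions.
    A constant depth bias has zero error gradient and is not penalized."""
    err = filtered.astype(float) - ground_truth.astype(float)
    gy, gx = np.gradient(err)
    grad_mag = np.hypot(gx, gy)
    return 100.0 * np.count_nonzero(grad_mag > grad_threshold) / err.size
```

Note the design intent: a uniform offset of the whole depth map leaves the rendered geometry locally intact, so only spatially varying errors count.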

- Gradient-normalized RMSE calculates the RMSE over the luminance channel of the rendered image, excluding truly occluded areas, and normalizes it by the local gradient, thus penalizing less the intensity variations in textured areas [21]:

NRMSE = sqrt( (1/|Ω_v|) Σ_{x ∈ Ω_v} ( V_ẑ(x) − V_y(x) )² / ( ‖∇V_y(x)‖² + ε ) ),    (8)

where V_ẑ and V_y denote the luminance of the views rendered with the processed and the true depth, respectively, and ε is a small regularization constant.
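The gradient-normalized RMSE can be sketched as follows (a simplified reading of the normalization in [21]; the epsilon regularizer and the function signature are our assumptions):

```python
import numpy as np

def gradient_normalized_rmse(rendered, reference, occlusion_mask=None, eps=1.0):
    """Eq. (8) sketch: RMSE over luminance, normalized per-pixel by the local
    gradient magnitude of the reference view, with truly occluded pixels
    excluded via the optional boolean mask."""
    r = rendered.astype(float)
    v = reference.astype(float)
    gy, gx = np.gradient(v)
    denom = np.hypot(gx, gy) + eps        # down-weight errors in textured areas
    ratio = (r - v) / denom
    if occlusion_mask is not None:
        ratio = ratio[~occlusion_mask]    # keep visible pixels only
    return np.sqrt(np.mean(ratio ** 2))
```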

3.2 Experiments

We present two experiments. In the first experiment, we consider a depth sequence estimated from noisy stereo sequences. Namely, a given stereo sequence is used to estimate the depth sequence; then, white noise is added to the stereo video, and the noisy stereo video is used to estimate the impaired depth sequence. The latter is filtered by the suggested video hypothesis filtering. For comparison, median filtering is applied to the noisy depth sequence and to the per-frame hypothesis-filtered data. In our practical setting, we have used a stereo pair of the 'Cones' test data from the Middlebury Stereo Evaluation benchmark [22]. For that stereo pair we have the ground-truth depth, and we also estimated the depth by the method in [23]. To simulate a stereo video, we repeated the stereo pair 40 times to form 40 successive video frames, then added a different amount of noise to each frame and estimated the depth from each so-obtained noisy stereo frame. The results of the different filtering techniques applied to the noisy depth sequence are given in Figure 7. The results are consistent over all measures and show considerable improvement along the temporal dimension when the video extension of the hypothesis filtering is applied. The video hypothesis filtering not only manages to equalize the quality along the time axis but also improves the depth estimates compared to the ones obtained from noise-free data by the method from [23].

In the second experiment we simulate blocky artifacts in the depth channel. To create ground-truth video plus depth, we circularly shifted the same 'Cones' frame with a radius of 10 pixels, adding some noise to the shifting vectors, and then cropped the central parts of the so-obtained frames. Thus, we got a sequence simulating circular motion of the camera plus a small amount of shaking. The sequence was compressed by an H.264 encoder in IPIPIP mode, varying the quantization parameter (QP) slightly per frame to simulate different amounts of blockiness in successive frames. The filtering results are presented in Figure 8. We compare the following filters: the single-frame hypothesis filter, the same followed by median filtering along time, and the video hypothesis filtering. As can be seen in the figure, the video version of hypothesis filtering has the most consistent performance. It performs especially well around edges. The rendered frames are of similar quality, thus providing a smooth and flicker-free experience. The only exception is the BAD metric, where the compressed depth seems to be the 'best'. The metric, originally introduced to measure the performance of depth estimation algorithms, simply counts differences between ground-truth and processed pixels, no matter how big or small (but above a threshold) the differences are. While all filtering algorithms introduce small changes over the whole image, those small changes amount to a higher percentage than the number of differing pixels in the quantized depth image.

However, what really matters are the bigger differences appearing around edges. These are well tackled by the filtering, as seen in the other metrics. Especially informative is the NRMSE, which measures the quality of the rendered channel and is closer to human perception. There, the new filtering approach truly excels.

Finally, we provide some visual illustrations of the performance of the algorithm. We use the 'Book arrival' sequence provided by Fraunhofer HHI, where the depth is estimated by the MPEG depth estimation software [24]. While it incorporates rather powerful techniques and yields high-quality and time-consistent depth maps, our technique still adds some improvements. Figure 9 shows the result of filtering for frame 20. From left to right, the figure shows the originally-estimated depth, the depth obtained after median filtering along time, and the depth resulting from the proposed method. The depth estimation has failed around the face of the person entering the room and at the floor area.

Median filtering manages to correct the depth of the floor but fails to correct the face of the person. The proposed method restores both the floor and the face. The same sequence has been compressed/decompressed with H.264 intra-frame coding and then filtered. The result of decompression and filtering is shown in Figure 10. Again, despite the substantial blocking artifacts, details such as human faces have been successfully restored.

4. CONCLUSIONS

We have suggested a fast, memory-efficient, and high-quality filtering approach for depth map sequences. Based on the approach in [11], it utilizes color information from the associated video channel and also adapts to the true depth range and its structure. As a result of the efficient data structure for processing, our technique delivers highly time-consistent depth sequences. The technique is especially applicable to depth sequences impaired by blocky artifacts resulting from block-transform based compression. For such sequences, it is possible to tune the filtering parameters depending on the quantization parameter of the compression engine, in a fashion similar to [14]. The technique is also applicable in depth estimation scenarios where the depth quality is compromised by noisy data or by requirements for quick processing. The approach does not require knowledge of motion or optical flow, as it relies on the color weighting to discard non-suitable pixels from adjacent video frames in the filtering domain.

ACKNOWLEDGEMENTS

This work was supported by the EC within FP7 (Grant 216503, acronym MOBILE3DTV) and by the Academy of Finland (project no. 213462, Finnish Programme for Centres of Excellence in Research 2006-2011).

REFERENCES

[1] A. Alatan, Y. Yemez, U. Gudukbay, X. Zabulis, K. Muller, C. Erdem, C. Weigel, "Scene Representation Technologies for 3DTV—A Survey," IEEE Trans. Circuits and Systems for Video Technology, vol. 17, no. 11, pp. 1587-1605, Nov. 2007.

[2] C. Fehn, "Depth-image-based rendering (DIBR), compression and transmission for a new approach on 3D-TV," in Proc. SPIE Stereoscopic Displays and Virtual Reality Systems XI, 2004.

[3] A. Vetro, S. Yea, A. Smolic, "Towards a 3D Video Format for Auto-Stereoscopic Displays," in Proc. SPIE Applications of Digital Image Processing XXXI, vol. 7073, September 2008.

[4] S. Kang, R. Szeliski, and J. Chai, "Handling Occlusions in Dense Multi-view Stereo," in Proc. IEEE Conf. CVPR, vol. 1, pp. 103-110, 2001.

[5] D. Scharstein and R. Szeliski, "A taxonomy and evaluation of dense two-frame stereo correspondence algorithms," International Journal of Computer Vision, vol. 47, pp. 7-42, April-June 2002.

[6] J. Zhu, L. Wang, R. Yang and J. Davis, "Fusion of Time-of-Flight Depth and Stereo for High Accuracy Depth Maps," in Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2008.

[7] P. Biber, S. Fleck, F. Busch, M. Wand, T. Duckett, and W. Straßer, "3D Modeling of Indoor Environments by a Mobile Platform with a Laser Scanner and Panoramic Camera," in Proc. 13th European Signal Processing Conference (EUSIPCO 2005), 2005.

[8] R. Szeliski and D. Scharstein, "High-accuracy stereo depth maps using structured light," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, Madison, 2003.

[9] P. Merkle, Y. Morvan, A. Smolic, D. Farin, K. Mueller, P. H. N. de With, T. Wiegand, "The effects of multiview depth video compression on multiview rendering," Signal Processing: Image Communication, January 2009.

[10] A. Boev, D. Hollosi, A. Gotchev, K. Egiazarian, "Classification and simulation of stereoscopic artifacts in mobile 3DTV content," in Proc. SPIE vol. 7237, Stereoscopic Displays and Applications XX, 2009, p. 72371F.

[11] Q. Yang, R. Yang, J. Davis, and D. Nister, "Spatial-Depth Super Resolution for Range Images," in Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2007.

[12] J. Kopf, M. Cohen, D. Lischinski, M. Uyttendaele, "Joint Bilateral Upsampling," ACM Trans. Graphics (Proc. SIGGRAPH), 26(3), 2007.

[13] A. K. Riemens, O. P. Gangwal, B. Barenbrug, R.-P. M. Berretty, "Multistep joint bilateral depth upsampling," in Proc. SPIE vol. 7257, Visual Communications and Image Processing 2009, p. 72570M.

[14] S. Smirnov, A. Gotchev, and K. Egiazarian, "Method for Restorations of Compressed Depth Maps: A Comparative Study," in Proc. VPQM 2009, 2009.

[15] V. Katkovnik, K. Egiazarian, and J. Astola, Local Approximation Techniques in Signal and Image Processing, SPIE Publications, 2006.

[16] C. Tomasi and R. Manduchi, "Bilateral Filtering for Gray and Color Images," in Proc. IEEE International Conference on Computer Vision, Bombay, 1998.

[17] G. Zhang, J. Jia, T. Wong, H. Bao, "Consistent Depth Maps Recovery from a Video Sequence," IEEE Trans. Pattern Anal. Mach. Intell., vol. 31, no. 6, pp. 974-988, 2009.

[18] C. Cigla and A. A. Alatan, "Temporally consistent dense depth map estimation via Belief Propagation," in Proc. 3DTV-CON 2009, Potsdam, Germany, 4-6 May 2009.

[19] A. Foi, V. Katkovnik, and K. Egiazarian, "Pointwise Shape-Adaptive DCT for High-Quality Denoising and Deblocking of Grayscale and Color Images," IEEE Trans. Image Process., vol. 16, no. 5, pp. 1395-1411, 2007.

[20] D. Scharstein and R. Szeliski, "A taxonomy and evaluation of dense two-frame stereo correspondence algorithms," International Journal of Computer Vision, vol. 47, pp. 7-42, April-June 2002.

[21] S. Baker, D. Scharstein, J. P. Lewis, S. Roth, M. Black, and R. Szeliski, "A database and evaluation methodology for optical flow," in Proc. IEEE Int'l Conf. on Computer Vision (ICCV 2007), Rio de Janeiro, Brazil, October 2007.

[22] D. Scharstein and R. Szeliski, Middlebury Stereo Vision Page. [Online]. http://vision.middlebury.edu/stereo/

[23] K.-J. Yoon and I.-S. Kweon, "Locally Adaptive Support-Weight Approach for Visual Correspondence Search," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2005, pp. 924-931.

[24] O. Stankiewicz, K. Wegner, "Depth Map Estimation Software version 3," ISO/IEC JTC1/SC29/WG11 MPEG/M15540, July 2008, Hannover, Germany.


Figure 7 Comparative results of filtering approaches as in Experiment 1. (Plots of PSNR, BAD, CONSIST, BAD near discontinuities, PSNR of the virtual channel, and Normalized RMSE versus frame number for the 'Cones' sequence; compared methods: Noise-Free Estimate; Noisy Estimate; Noisy Estimate + Median (5 frm); Noisy + Hypothesis + Median (5 frm); Noisy + Hypothesis; Noisy + Video Hypothesis (3 frm); Noisy + Video Hypothesis (5 frm).)

Figure 8 Comparative results of filtering approaches as in Experiment 2. (Plots of PSNR, BAD, CONSIST, BAD near discontinuities, PSNR of the virtual channel, and Normalized RMSE versus frame number for the 'Cones' sequence; compared methods: Noisy Estimate; Noisy + Hypothesis; Noisy + Hypothesis + Median (7 frm); Noisy + Video Hypothesis (7 frm).)

Figure 9 Results of filtering of 'Book arrival' depth sequence. From left to right: originally-estimated depth; median-filtered; filtered by proposed approach

Figure 10 Filtering of compressed depth sequence. From left to right: decompressed depth map; decompressed depth map filtered by the proposed approach
