Characterizations for Matrices of Dominant Support Parameters in Soft Sets

The matrix of dominant support parameters plays an important role in solving the normal and pseudo parameter reduction problems for soft sets. This article presents a fundamental investigation of the properties of the matrix of dominant support parameters. Firstly, we obtain some basic structural and quantitative properties. Then we give retrieving algorithms and filling algorithms for computing the initial soft set and the matrix itself by using only part of the matrix. Next we propose characterization theorems to check which kinds of set-valued matrices can be induced by a soft set as its matrix of dominant support parameters. Finally, we make a comparison between the matrix of dominant support parameters and the soft discernibility matrix. It is shown that the matrix of dominant support parameters has its own characteristics and can represent the soft discernibility matrix in a simple way. An alternative and simple procedure for computing the order relations with the matrix of dominant support parameters is also introduced.


I. INTRODUCTION

A. SOFT SET AND CHOICE VALUE
In 1999 Molodtsov initiated the theory of soft sets, which provides a new mathematical tool for dealing with uncertainty and vagueness [1]. Soft set theory has been studied algebraically [2]-[11] and topologically [12]-[15], and it has also been combined with other models of vague concepts such as fuzzy sets [16]-[24] and rough sets [25]-[27]. Hypersoft sets were studied in [28]. The theory of soft sets has been shown to be useful for decision making in various fields [18], [29]-[34].
A soft set can be regarded as a 0-1 valued information system [35]. It can be represented by a 0-1 valued table or a 0-1 valued matrix. In a soft set over U, the choice value of an object has been defined as its number of supporting parameters, i.e., the sum of its corresponding row in the tabular or matrix representation of the soft set [1]. The decision making scheme of a soft set is to rank the objects by their choice values. The object with the maximum number of supporting parameters is the decision making result. (The associate editor coordinating the review of this manuscript and approving it for publication was Josue Antonio Nescolarde Selva.)
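As a minimal sketch (with illustrative data, not from any table in this article), the choice values are simply the row sums of the 0-1 table:

```python
# A soft set as a 0-1 table: rows are objects, columns are parameters.
# (Illustrative data, not from any table in this article.)
table = {
    "u1": [1, 0, 1, 1],
    "u2": [0, 1, 1, 0],
    "u3": [1, 1, 1, 1],
}

# Choice value of an object = number of its supporting parameters = row sum.
choice = {u: sum(row) for u, row in table.items()}
best = max(choice, key=choice.get)  # the decision making result
print(choice, best)                 # {'u1': 3, 'u2': 2, 'u3': 4} u3
```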

B. NORMAL PARAMETER REDUCTION OF SOFT SET AND THE MATRIX OF DOMINANT SUPPORT PARAMETERS
When there exist lots of parameters in a soft set, we need to figure out a certain kind of subset of parameters, which is called a normal parameter reduction [36]. Each subset of parameters of this kind contributes the same amount to every object. That is to say, once we delete such a subset of parameters, every object loses the same amount from its choice value. As a result, the ranking of the objects does not change.
It is important to give optimal algorithms for normal parameter reduction of soft sets. Many researchers have contributed to this problem [37]-[40]. A method for integrating all normal parameter reductions of a soft set into a propositional logic formula was proposed in [41]. Ma et al. [42] pointed out an important property of normal parameter reduction of soft sets, by which the workload for finding candidates can be reduced.
The matrix of dominant support parameters was introduced and investigated in [41], [43], [44]. It has been proved in [43] that the parameter reduction problems of soft sets can be translated into 0-1 linear programming problems. It was also shown in [43] that, by using part of the matrix of dominant support parameters, the conditions for a normal or pseudo parameter reduction can be represented by linear constraints among local parameters. 0-1 information systems appear in many fields. They can be used to record information in computer systems. Black-and-white pictures can be represented by 0-1 matrices or tables. In formal concept analysis [45], a formal context (U, A, I) can be represented by a 0-1 information system. In rough set theory [46], if an information system (U, A, D, f) satisfies ∀a ∈ A, |D_a| = 2, i.e., every D_a is two-valued, then it can be shown as a 0-1 information system. In graph theory [47], a classical graph (V, E) can also be represented by a 0-1 table or matrix, called its adjacency matrix. In soft set theory, we often use a 0-1 table to represent a soft set. However, we must make it clear that the same 0-1 information system may have different meanings in the corresponding areas. Moreover, different fields have different research goals and applications, so there exist different methods for dealing with the related 0-1 information systems.
In rough set theory, given a 0-1 valued information system, we can define its discernibility matrix D [48]. An arbitrary entry D(i, j) records the set of attributes, each of which can be used to distinguish the objects u_i and u_j. For general information systems, we can also define a discernibility matrix D, and it is very important for the attribute reduction problems of rough sets. For ordered information systems, a dominance relation was proposed in [49]. In formal concept analysis, the 0-1 information system is called the relation matrix; it plays an important role in the attribute reduction problem of concept lattice theory.
The matrix of dominant support parameters for a soft set is different from the discernibility matrix (or dominance relation) of a general (ordered) information system [49]. For each pair of objects, it not only discards the parameters on which the objects have the same value, but also makes a detailed classification of the remaining ones. This operation helps us capture the essence of the parameter reduction problems. In other words, the matrix of dominant support parameters retains the advantage information of each object.
Using the idea of the discernibility matrix, [50] proposed similar concepts such as the soft discernibility matrix and the weighted soft discernibility matrix for soft sets, which were shown to be useful in soft set decision making. We will make a detailed comparison between the matrix of dominant support parameters and the soft discernibility matrix in Section V.

D. MAIN QUESTIONS TO BE INVESTIGATED
The parameter reduction problems of soft sets are quite different from those of information systems in rough set theory or of formal concept analysis. So it becomes an important task for researchers to study and develop the properties and characterizations of matrices of dominant support parameters. They have been discussed in [43], but not sufficiently or systematically. Many questions remain to be investigated. We list them as follows: (1) From the point of view of knowledge representation, what is the logical relationship among the entries of the matrix of dominant support parameters?
(2) Given a matrix of dominant support parameters, how can we figure out the initial soft set?
(3) Given a set-valued square matrix, how can we check or determine whether it is the matrix of dominant support parameters of some soft set?
These questions are fundamental and important for the development of soft set theory. The remainder of this article is organized as follows. Section II introduces basic concepts such as the soft set and the matrix of dominant support parameters. Fundamental structural and quantitative properties are then proposed in Section III; with these properties we can learn much more from the matrix of dominant support parameters. In Section IV, algorithms for retrieving the soft set itself from a given matrix of dominant support parameters are presented, the logical relationships among the entries are thoroughly discussed, and characterization theorems for the matrix of dominant support parameters are established. In Section V we make a comparison between the matrix of dominant support parameters and the soft discernibility matrix. Finally, we conclude this article and outline potential future work.

II. PRELIMINARIES
In this article, suppose U = {u_1, u_2, · · · , u_n} is a finite set of objects and E is a set of parameters. For example, the attributes of an information system can be taken as parameters. ℘(U) denotes the power set of U, and |A| denotes the cardinality of a set A. If B ⊆ A, then B^C denotes the complementary set of B. Following [1] and [43], we have the basic concepts about soft sets given in the following definitions.
Definition 2.1 (Soft Set [1]): A pair S = (F, A) is called a soft set over U, where A ⊆ E and F is a mapping F : A → ℘(U). ∀e ∈ A, F(e) means the subset of U corresponding with parameter e. We also use F(u, e) = 1 (F(u, e) = 0) to mean that u is (not) an element of F(e), i.e., u ∈ F(e) (u ∉ F(e)).
Definition 2.2 (Support Set of Parameters for Objects): Let S = (F, A) be a soft set over U. ∀u ∈ U, define the support set of parameters for u as the set {e ∈ A | F(u, e) = 1}, denoted by supp(u).
Definition 2.3 (Choice Value): ∀u ∈ U, the choice value of u is σ_S(u) = |supp(u)|, i.e., its number of supporting parameters. We write σ_S as σ for short if the underlying soft set S is explicit.
Example 2.1 [43]: TABLE 1 represents a soft set S = (F, E) over the object domain U = {u_1, u_2, · · · , u_6} and parameter set E.
Definition 2.4 [41], [43], [44] (Dominant Support Parameters): Given a soft set S = (F, A) over U. ∀u_i, u_j ∈ U, define the set of dominant support parameters of u_i over u_j as D_{i←j} = {e ∈ A | F(u_i, e) = 1 and F(u_j, e) = 0}.
Example 2.2: Consider the soft set S = (F, A) given in Table 1. By Definition 2.4 it is easy to compute each set D_{i←j}.
Definition 2.5 [43] (Matrix of Dominant Support Parameters): Given a soft set S = (F, A) over U, |U| = n. We call the matrix D_S = [D_{i←j}]_{n×n} the matrix of dominant support parameters for soft set S. We will also write D_S(i, j) for D_{i←j}.
Example 2.3 [43]: Consider the soft set S = (F, A) given in Table 1. By Definition 2.5 we obtain D_S, which is shown in Fig. 1 (for the convenience of readers we also list the objects u_i, i = 1, 2, · · · , 6).
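The definition of D_S can be sketched in a few lines of code (illustrative data, not the paper's Table 1):

```python
def dominant_support_matrix(table, params):
    """D_S(i, j) = {e in A : F(u_i, e) = 1 and F(u_j, e) = 0} (Definition 2.5)."""
    n = len(table)
    return [[{e for k, e in enumerate(params)
              if table[i][k] == 1 and table[j][k] == 0}
             for j in range(n)]
            for i in range(n)]

params = ["e1", "e2", "e3"]
table = [[1, 0, 1],   # u1
         [0, 1, 1],   # u2
         [1, 1, 0]]   # u3

D = dominant_support_matrix(table, params)
print(D[0][1])  # {'e1'}: u1 supports e1 while u2 does not
print(D[1][0])  # {'e2'}
```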

III. FUNDAMENTAL PROPERTIES OF THE MATRIX OF DOMINANT SUPPORT PARAMETERS

A. STRUCTURAL PROPERTIES OR RELATIONS FOR ENTRIES OF THE MATRIX OF DOMINANT SUPPORT PARAMETERS
By Lemma 3.1 in [43], we have the following Theorem 3.1. We add its proof and summarize the related properties here, so that they become more systematic together with the other properties in this subsection.
(i) (Identical-Column-Row Disjoint Property) Once a parameter appears in the i-th column, it cannot appear in the i-th row, and vice versa, i.e., ∀i = 1, 2, · · · , n, (∪_{k=1}^{n} D_S(i, k)) ∩ (∪_{k=1}^{n} D_S(k, i)) = ∅.
(iii) (Identical-Column-Row Partition Property) ∀i = 1, 2, · · · , n, (∪_{k=1}^{n} D_S(i, k)) and (∪_{k=1}^{n} D_S(k, i)) form a partition of E − {e | F(e) = ∅ or F(e) = U}.
For a better understanding of the above fundamental properties, we give three figures, Fig. 1 to Fig. 3, using the soft set given in Example 2.1 and its matrix of dominant support parameters shown in Example 2.3.
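Properties (i) and (iii) can be checked mechanically on any 0-1 table; a sketch with illustrative data:

```python
params = ["e1", "e2", "e3", "e4"]
table = [[1, 0, 1, 0],   # u1
         [0, 1, 1, 0],   # u2
         [1, 1, 0, 1]]   # u3  -- no column is all 0s or all 1s

n = len(table)
D = [[{e for k, e in enumerate(params)
       if table[i][k] == 1 and table[j][k] == 0} for j in range(n)]
     for i in range(n)]

# Parameters whose approximation is neither empty nor all of U.
nontrivial = {e for k, e in enumerate(params)
              if 0 < sum(row[k] for row in table) < n}

for i in range(n):
    row_union = set().union(*D[i])                         # union over the i-th row
    col_union = set().union(*(D[k][i] for k in range(n)))  # union over the i-th column
    assert row_union & col_union == set()        # (i): disjointness
    assert row_union | col_union == nontrivial   # (iii): partition of the nontrivial parameters
print("properties (i) and (iii) hold")
```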
Corollary 3.1 deals specifically with symmetric positions. It tells us that for an arbitrary pair of symmetric entries D_S(i, j) and D_S(j, i): D_S(i, j) is disjoint not only with D_S(j, i) but also with every entry on the j-th row or i-th column, and D_S(j, i) is disjoint not only with D_S(i, j) but also with every entry on the i-th row or j-th column. In Fig. 5, D(4, 3) = {e_2, e_7}; it is easy to see that {e_2, e_7} (the entry framed with a solid line) is disjoint with every entry in the 4th column (surrounded by the dotted line).
The Submatrix Diagonal Vertices Rule tells us that if a parameter e appears in both entries of one diagonal pair of a submatrix, then e also appears in both entries of the other diagonal pair of the same submatrix. See Fig. 6 for an example: e_5 ∈ D(3, 4) ∩ D(6, 5); consider the subdiagonal of the submatrix consisting of D(3, 4), D(6, 5), D(6, 4), D(3, 5) (the entries with colored elements). We can see that e_5 appears in both D(6, 4) and D(3, 5).
According to Theorem 3.3, the following Corollary 3.2 and Corollary 3.3 can be derived.

B. QUANTITATIVE PROPERTIES FOR THE ENTRIES OF THE MATRIX OF DOMINANT SUPPORT PARAMETERS
Given a soft set S = (F, A) over U, |U| = n, let D_S be its matrix of dominant support parameters. In this subsection we introduce some definitions and then discuss their properties.
Corollary 3.5: Given a soft set S = (F, A) over U, |U| = n, with matrix of dominant support parameters D_S. Then ∀e_k ∈ A, k = 1, 2, · · · , m, the corresponding counting identity (3.17) holds. For illustration, consider e_3 when we remove all other parameters from D_S: it can be checked that ND_S(e_3, i, j), where e_3 ∈ D_S(i, j), satisfies Corollary 3.8.

IV. APPLICATIONS AND CHARACTERIZATIONS OF D_S WITH CERTAIN ENTRIES OF THE MATRIX OF DOMINANT SUPPORT PARAMETERS
In this section, all soft sets mentioned have no ∅ or U approximations. That is to say, the tabular representation of such a soft set has no column consisting only of 0s or only of 1s.

A. RETRIEVING OF SOFT SETS WITH THE JTH ROW AND JTH COLUMN OF THE MATRIX OF DOMINANT SUPPORT PARAMETERS
As shown in the above section, the entries of the matrix of dominant support parameters of the same soft set are closely connected. Actually, we need only part of these entries to regain the initial soft set.
Since it is assumed that F(e_k) ≠ ∅ and F(e_k) ≠ U, every parameter e_k appears somewhere in the first row or the first column of D_S. If F(u_1, e_k) = 1, then e_k appears in some entry D_{1←K} of the first row, and F(u_i, e_k) = 1 if and only if e_k ∉ D_{1←i}. If F(u_1, e_k) = 0, then F(u_i, e_k) = 1 if and only if e_k ∈ D_{i←1}. Similarly, we can also use the Jth row and the Jth column of D_S to regain the soft set S = (F, A). The retrieving algorithms enable us to construct soft sets or 0-1 ordered information systems which are required to satisfy certain constraints. It is very interesting to construct different distributions of subsets of parameters on the first row and the first column and see what kind of information systems we get. An example is shown in Fig. 11, where A = {1, 2, · · · , 14}, U = {u_1, u_2, · · · , u_8}.
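The retrieval described above can be sketched as follows (illustrative data; `first_row[k]` plays the role of D_{1←k} and `first_col[k]` of D_{k←1}):

```python
def retrieve_soft_set(first_row, first_col, params):
    """Recover the 0-1 table from the 1st row and 1st column of D_S,
    assuming no parameter has approximation equal to the empty set or U.
    first_row[k] = D_{1<-k}, first_col[k] = D_{k<-1}; index 0 is object u_1."""
    n = len(first_row)
    supp_u1 = set().union(*first_row)  # parameters e with F(u_1, e) = 1
    table = []
    for i in range(n):
        row = []
        for e in params:
            if e in supp_u1:
                # F(u_1, e) = 1: object u_i lacks e exactly when e is in D_{1<-i}
                row.append(0 if e in first_row[i] else 1)
            else:
                # F(u_1, e) = 0: object u_i has e exactly when e is in D_{i<-1}
                row.append(1 if e in first_col[i] else 0)
        table.append(row)
    return table

# Round trip: build D_S from a table, keep only its 1st row/column, retrieve.
params = ["e1", "e2", "e3"]
table = [[1, 0, 1], [0, 1, 1], [1, 1, 0]]
n = len(table)
D = [[{e for k, e in enumerate(params)
       if table[i][k] == 1 and table[j][k] == 0} for j in range(n)]
     for i in range(n)]
print(retrieve_soft_set(D[0], [D[k][0] for k in range(n)], params) == table)  # True
```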

B. FILLING ALGORITHMS OF D_S WITH PART OF THE MATRIX OF DOMINANT SUPPORT PARAMETERS
According to Definition 2.5 and Retrieving Algorithm 2, we obtain the following algorithm, which computes the rest of D_S from the Jth row and the Jth column of D_S. Filling Algorithm 1 computes the remaining entries of D_S one by one, which is costly. It can be proved that ∀i, j, D_{i←j} can be represented by the entries in the 1st row and the 1st column of D_S as follows:

Theorem 4.3 (Filling Algorithm 2 for D_S With the First Row and First Column of D_S):
Given the 1st row and the 1st column of D_S for soft set S = (F, A) over U, then ∀i, j = 1, 2, · · · , |U|, D_{i←j} = α − β, where α = D_{i←1} ∪ (∪_{k=1}^{|U|} D_{1←k} − D_{1←i}) and β = D_{j←1} ∪ (∪_{k=1}^{|U|} D_{1←k} − D_{1←j}).
Proof: Let e ∈ α − β. By e ∈ α, we have two situations:
• e ∈ D_{i←1}. Thus F(u_i, e) = 1 and F(u_1, e) = 0. It suffices to show that F(u_j, e) = 0. We prove it by contradiction. If F(u_j, e) = 1, then e ∈ D_{j←1} ⊆ β. That is a contradiction.
• e ∈ (∪_{k=1}^{|U|} D_{1←k} − D_{1←i}). So F(u_1, e) = 1 and e ∉ D_{1←i}, hence F(u_i, e) = 1. It suffices to show that F(u_j, e) = 0. We prove it by contradiction. If F(u_j, e) = 1, then e ∉ D_{1←j}, so e ∈ (∪_{k=1}^{|U|} D_{1←k} − D_{1←j}) ⊆ β. That is a contradiction.
At last, it is easy to check by set theory that D_{i←j} ⊆ α − β. In general, we get the analogous representation with the Jth row and the Jth column in place of the first ones, where α and β are defined correspondingly.
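Filling Algorithm 2 can be sketched and checked against the direct definition (illustrative data; α is computed exactly as in Theorem 4.3):

```python
def fill_entry(i, j, first_row, first_col):
    """D_{i<-j} = alpha(i) - alpha(j), where alpha(t) collects exactly the
    parameters supported by u_t (Theorem 4.3).
    first_row[k] = D_{1<-k}, first_col[k] = D_{k<-1}."""
    union_row1 = set().union(*first_row)
    alpha = lambda t: first_col[t] | (union_row1 - first_row[t])
    return alpha(i) - alpha(j)

params = ["e1", "e2", "e3"]
table = [[1, 0, 1], [0, 1, 1], [1, 1, 0]]
n = len(table)
D = [[{e for k, e in enumerate(params)
       if table[i][k] == 1 and table[j][k] == 0} for j in range(n)]
     for i in range(n)]
first_row, first_col = D[0], [D[k][0] for k in range(n)]
ok = all(fill_entry(i, j, first_row, first_col) == D[i][j]
         for i in range(n) for j in range(n))
print(ok)  # True: every entry is recovered from the 1st row and column alone
```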

Theorem 4.4 (Filling Algorithm 4 for D_S With the First Row and First Column of D_S):
Given the 1st row and the 1st column of D_S for soft set S = (F, A) over U, then ∀i, j = 1, 2, · · · , |U|,
D_{i←j} = (D_{i←1} − D_{j←1}) ∪ (D_{1←j} − D_{1←i}).
Proof: Let e ∈ D_{i←j}, i.e., F(u_i, e) = 1 and F(u_j, e) = 0. If e ∈ D_{1←j} − D_{1←i}, we are done. Otherwise e ∉ D_{1←j} − D_{1←i}, and we have two possible situations:
• e ∉ D_{1←j}. Since F(u_j, e) = 0, this forces F(u_1, e) = 0, so e ∈ (D_{i←1} − D_{j←1}).
• e ∈ D_{1←j} and e ∈ D_{1←i}. Then F(u_1, e) = 1 and F(u_i, e) = 0; that is a contradiction, since e ∈ D_{i←j}.
The reverse inclusion is checked similarly. Generally, the corresponding corollary with the Jth row and the Jth column is also true.
Proof: Theorem 4.7 is implied by Theorem 3.1 and the other properties established in Section III.
Proof: First we need to divide A into two parts, and we have 2^{|A|} ways of doing this. Each way can be represented by a pair B ⊆ A (the parameters appearing in the first column) and B^C ⊆ A (those appearing in the first row). For the first column there exist (2^{|U|−1} − 1)^{|B|} ways; similarly, for the first row there are (2^{|U|−1} − 1)^{|B^C|} ways. So in total,
|D_S(U, A)| = Σ_{B⊆A} (2^{|U|−1} − 1)^{|B|} (2^{|U|−1} − 1)^{|B^C|} = 2^{|A|} (2^{|U|−1} − 1)^{|A|} = (2^{|U|} − 2)^{|A|}.
Corollary 4.8: Suppose U is a set of objects and A is a set of parameters. If D is a randomly generated set-valued matrix of size |U| × |U| with each D(i, j) ∈ 2^A, then the probability that D ∈ D_S(U, A) is equal to
(2^{|U|} − 2)^{|A|} / 2^{|A|·|U|^2}.    (4.13)
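The counting formula can be verified by brute force for small |U| and |A| (a sketch; `dominant_support_matrix` is just Definition 2.5 applied to every admissible 0-1 table):

```python
from itertools import product

def dominant_support_matrix(F, params):
    """D[i][j] = set of parameters e with F[i][e] = 1 and F[j][e] = 0."""
    n = len(F)
    return tuple(
        tuple(frozenset(e for e in params if F[i][e] == 1 and F[j][e] == 0)
              for j in range(n))
        for i in range(n)
    )

n, m = 3, 2        # |U| = 3 objects, |A| = 2 parameters
params = range(m)

# Every admissible column: a 0-1 vector over U that is neither all 0 nor all 1.
columns = [c for c in product([0, 1], repeat=n) if 0 < sum(c) < n]

seen = set()
for cols in product(columns, repeat=m):
    # Row-major 0-1 table: F[i][e] = value of object u_i on parameter e.
    F = [[cols[e][i] for e in params] for i in range(n)]
    seen.add(dominant_support_matrix(F, params))

print(len(seen), (2**n - 2)**m)  # both 36: the count matches (2^|U| - 2)^|A|
```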

V. APPLICATIONS OF THE MATRIX OF DOMINANT SUPPORT PARAMETERS IN REPRESENTING THE SOFT DISCERNIBILITY MATRIX AND AN ALTERNATIVE ALGORITHM FOR COMPUTING THE ORDER RELATIONS OF S
In this section we want to make a comparison between the matrix of dominant support parameters and the soft discernibility matrix in soft set theory given in [50]. We will show that the matrix of dominant support parameters can represent the soft discernibility matrix in a simple way and provide an alternative procedure for computing the order relation by choice values.

A. THEORIES FOR DECISION MAKING WITH THE MATRIX OF DOMINANT SUPPORT PARAMETERS OF SOFT SET S
Example 5.1: In order to make the comparison clear, we illustrate our idea with a soft set example (the soft set in Example 1 and Table 1 of [50]), where U = {h_1, h_2, · · · , h_6} and E = {e_1, e_2, · · · , e_7}; see TABLE 6. TABLE 7, TABLE 8 and D_S give us a clear and intuitive comparison among the concepts of discernibility matrix, soft discernibility matrix and matrix of dominant support parameters. For an information system, the discernibility matrix takes 0 and 1 just as different symbols, which are used to distinguish the objects. The soft discernibility matrix takes the form of the discernibility matrix as its foundation, but it pays attention to the order 1 > 0 and represents these relations by adding well-defined superscripts; the superscripts in an arbitrary entry of TABLE 8 can be divided into two parts. The matrix of dominant support parameters [41], [43] was inspired by the following idea: for each pair of objects u_i, u_j, we define D_S(i, j) to be the set of parameters for which u_i has a higher value than u_j, i.e., F(u_i, e) = 1 and F(u_j, e) = 0.
It is easy to see a close relation between the soft discernibility matrix and the matrix of dominant support parameters: the union D_S(i, j) ∪ D_S(j, i) equals the set of elements in the entry D(i, j) (i > j) of the soft discernibility matrix. In a word, the matrix of dominant support parameters divides D(i, j) into two parts, which correspond to the different superscripts. That is, a parameter e_s with superscript t in D(i, j) belongs to D_S(i, j) if and only if t = i.
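This representation can be sketched as follows (superscripts are modeled as (parameter, object-index) pairs; illustrative data):

```python
def soft_discernibility_entry(i, j, D):
    """Entry D(i, j), i > j, of the soft discernibility matrix, rebuilt from
    the matrix of dominant support parameters: a parameter gets superscript i
    if it lies in D_S(i, j), and superscript j if it lies in D_S(j, i)."""
    return {(e, i) for e in D[i][j]} | {(e, j) for e in D[j][i]}

params = ["e1", "e2", "e3"]
table = [[1, 0, 1], [0, 1, 1], [1, 1, 0]]
n = len(table)
D = [[{e for k, e in enumerate(params)
       if table[i][k] == 1 and table[j][k] == 0} for j in range(n)]
     for i in range(n)]
print(sorted(soft_discernibility_entry(1, 0, D)))  # [('e1', 0), ('e2', 1)]
```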
[50] gave a method for finding the order relation among objects by Definition 2.3 with the soft discernibility matrix itself. It is a useful result. We list its algorithm in TABLE 9, and we refer to [50] for more details.
Now we try to propose another method for obtaining the order relation of objects with the matrix of dominant support parameters itself. Suppose {C_i | i = 1, 2, · · · , K}, K ≤ m, is the set of classes according to [50], i.e., ∃n such that h_i, h_j ∈ C_n if and only if ∀k = 1, 2, · · · , |E|, F(h_i, e_k) = F(h_j, e_k). (TABLE 9 shows Algorithm 1 for decision making based on soft discernibility in [50].) ∀i = 1, 2, · · · , K, define the quantities GET(i) and LOSE(i) accordingly, where |C| means the number of elements in C.
Proof: We take a pair of objects as an example. Assume h_1 ∈ C_1 and h_2 ∈ C_2; then ∀e ∈ E, one of the following four situations holds: (1) When F(h_1, e) = 1 and F(h_2, e) = 1, e contributes the same value to GET(1) and GET(2), and e contributes nothing to LOSE(1) and LOSE(2).
(4) When F(h_1, e) = 0 and F(h_2, e) = 1, i.e., e ∈ D(2, 1), e contributes an advantage value |C| for C_2 over C_1. The same argument works for an arbitrary pair of objects from different classes. This completes the proof.
The matrix in expression (5.5) and the matrix in Fig. 15 give test examples for Theorem 5.1 with the soft set in TABLE 6.
According to [43] and [50], the relation between the two matrices is easy to see, so by Theorem 5.1 we have the following corollary.
Corollary 5.1: Compared with the algorithm in [50] (given in TABLE 9), Algorithm 2 makes use of the matrix of dominant support parameters in a quantitative way. It is an alternative method for retrieving the order relations of S from D_S itself, and it is simpler and more direct.
By Filling Algorithm 4, once we have only the first-row and first-column entries of the matrix of dominant support parameters, we can first compute D_S itself and then use Algorithm 2 to get the decision making order. In this subsection we show the results of comparative experiments among Algorithm 1, Algorithm 2 and Algorithm 3.

1) EQUIPMENT AND DATA GENERATION METHOD
(i) Our experiments are performed on a PC with an AMD Ryzen 5 3500U 2.10 GHz CPU, 8 GB RAM and the Windows 10 Professional operating system.
(ii) Our data are generated in the following way: first, we use the rand function of MATLAB to generate a matrix of random numbers uniformly distributed in the interval [0, 1]. Then every number less than or equal to N in the matrix is changed to 1 and the rest of the numbers are changed to 0. In this way we obtain a 0-1 matrix in which the proportion of 1s is approximately N.
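The same generation procedure can be sketched in Python (with numpy's `default_rng` in place of MATLAB's `rand`):

```python
import numpy as np

def random_01_table(n_objects, n_params, ratio, seed=0):
    """Generate a random 0-1 table whose proportion of 1s is about `ratio`,
    mirroring the rand-and-threshold procedure described above."""
    rng = np.random.default_rng(seed)
    return (rng.random((n_objects, n_params)) <= ratio).astype(int)

table = random_01_table(1000, 50, 0.3)
print(abs(table.mean() - 0.3) < 0.02)  # True: proportion of 1s is close to 0.3
```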

2) MAIN CONTROLLING PARAMETERS IN OUR EXPERIMENTS
(i) The number of rows, i.e., the number of objects |U |.
(ii) The number of columns, i.e., the number of parameters |A|.
(iii) The ratio of value 1, i.e., the proportion of 1 values over the |U| × |A| entries.
As to FIGURE 17, we have a result similar to what FIGURE 16 shows. Fig. 18 summarizes the contents that have been discussed in this article.

VI. CONCLUSION

A. THE MAIN RESULTS OF THIS PAPER
(1) The fundamental structural and quantitative properties are investigated, and these properties can help us in having a better understanding of the matrix of dominant support parameters.
(2) By using only part of the matrix of dominant support parameters, we can recompute the initial soft set and fill in the rest of the matrix itself. These algorithms are important from the perspective of knowledge representation and data mining.
(3) The proposed characterization theorems are helpful. With them we can judge which kind of set-valued matrices can be the matrices of dominant support parameters for certain soft sets or information systems.

B. LIMITATIONS OF OUR THEORY
(1) The matrix of dominant support parameters does not contain those parameters whose corresponding approximations are equal to ∅ or U. This problem should be solved; a potential way is to add the related parameters to the main diagonal of the matrix.
(2) Algorithm 1 computes D_1 and D_2. If D_2 = ∅, then all the choice values of objects are odd numbers, or all of them are even numbers. That is, if ∃u_i, u_j, i ≠ j, such that σ(u_i) is odd and σ(u_j) is even, then D_2 ≠ ∅. Our algorithms do not exploit such information, so they need to be further improved.
(3) From the experimental results, we can see that Algorithm 3 costs much more time than Algorithm 2 and Algorithm 1 do. So we need to consider the following questions: (i) Do we have to retrieve all the entries of the matrix? (ii) Which ones should we retrieve, and in which order?

C. FUTURE WORK
In the near future, we will conduct further research on the matrix of dominant support parameters. For example, as a possible research direction, we will extend the matrices of dominant support parameters to hypersoft sets [28] or to soft sets combined with fuzzy set theory [16]-[24]. We will also investigate the areas in which our theory and methods can be useful.
BANGHE HAN received the B.S. degree in mathematics and applied mathematics, the M.S. degree in uncertainty reasoning, and the Ph.D. degree in computational intelligence from Shaanxi Normal University, Xi'an, Shaanxi Province, China, in 2004, 2007, and 2011, respectively. From 2009 to 2015, he was a Lecturer with the School of Mathematics and Statistics, Xidian University, Xi'an, where he has been an Associate Professor since 2015. His research interests include uncertainty reasoning theories such as fuzzy sets, fuzzy logic, soft sets, and rough sets. His awards include the First Prize of the Excellent Paper Award for young people of the Shaanxi Mathematics Association in 2014 and the Second Prize of the Xi'an Science and Technology Progress Award in 2017.
XINYU NIE is currently a junior with the School of Electronic Engineering, Xidian University. She has been engaged in research and writing on parameter reduction problems for soft sets for one year and has a strong interest in mathematical research and mathematical modeling. Her research interests include electronic circuit design, electromagnetic fields, and microwave technology.
Ms. Nie received systematic training in mathematical modeling and participated in four college mathematical modeling competitions during her two years in university. At the same time, she is participating in her fifth mathematical modeling competition as a team member. She also took part in the Internet+ and Challenge Cup competitions and has achieved preliminary results.
RUIZE WU is currently a junior with the School of Physics and Optoelectronic Engineering, Xidian University. He has been working on programming problems related to the parameter reduction problem of soft sets for one year and has a strong interest in mathematical research and mathematical modeling. He likes to discover hidden patterns in mathematical problems. His research interests include electromagnetic wave propagation and antenna design. Mr. Wu received scientific research training by participating in three undergraduate mathematical contests in modeling during his two years in college, and he is participating in his fourth mathematical modeling contest as the team leader. He also participated in the China Undergraduate Physics Tournament (CUPT) and was promoted to the regional competition.
SHENGLING GENG was born in Qinghai, China. She received the B.S. and M.S. degrees from Qinghai Normal University, and the Ph.D. degree from Shaanxi Normal University.
She is currently a Professor with the School of Computer Science, Qinghai Normal University. She also works in the Academy of Plateau Science and Sustainability in Qinghai Province, China. Her research interests include soft computing, data mining, soft set theory in decision making, and intelligent control. She has presided over and completed one national natural science fund project, three provincial scientific research projects, and one preliminary research project of national major basic research (973 Program). She has twice received the Third Prize of the Provincial Science and Technology Progress Award in Qinghai Province.