Complexity and Cognitive Computing
Lourdes Mattos Brasil¹ ³, Fernando Mendes de Azevedo¹, Jorge Muniz Barreto², and Monique Noirhomme-Fraiture³

¹ Dept. of Electrical Engineering, Federal University of Santa Catarina
University Campus, Florianopolis, Brazil, 88040-900
{lourdes†, azevedo}@gpeb.ufsc.br
² Dept. of Informatics and Statistics, Federal University of Santa Catarina
University Campus, Florianopolis, Brazil, 88040-900
[email protected]
³ Institut d'Informatique, FUNDP
Rue Grandgagnage, 21, B-5000 Namur, Belgium
{lma, mno}@info.fundp.ac.be
Abstract. The main goal of this paper is to develop a hybrid expert system that minimizes some of the complexity problems related to the artificial intelligence field. For instance, we can mention: the so-called bottleneck of expert systems, i.e., the knowledge elicitation process; the choice of the model for the knowledge representation to code human reasoning; in the connectionist approach, the number of neurons in the hidden layer and the topology used; and the difficulty of obtaining an explanation of how the network arrived at a conclusion. We therefore integrated cognitive computing into our system to overcome these difficulties.
1 Introduction
In the last decade, a new area has emerged within the Artificial Intelligence (AI) field, namely the so-called cognitive computing. We assume in this work that cognitive computing is a collection of emerging information technologies inspired by the qualitative nature of biologically based information processing found in the nervous system, human reasoning, decision making, and natural selection. Cognitive computing draws on the new discoveries being made in neuroscience, cognitive science, and biology, as well as the rapid advances under way in computing technology. Cognitive computing also draws on the wealth of existing experience in statistics, classical control theory, signal and image processing, and AI. While traditional computing technologies are quantitative in nature and emphasize precision and sequential order, cognitive computing tries to exploit the tolerance for imprecision, uncertainty, and partial truth found in biological systems to achieve tractability, robustness, and low-cost solutions to engineering problems [1].
† Currently at the Institut d'Informatique, FUNDP, Namur, Belgium; this work was sponsored by CAPES, Brazil.
The body of information technologies that make up cognitive computing
comes from research in several related and emerging areas and generally includes
Artificial Neural Networks (ANN), fuzzy logic, and evolutionary computation.
Some definitions of cognitive computing also include probabilistic reasoning and
chaos theory [1]. It is important to note that cognitive computing is a partnership
of technologies where each partner is complementary and contributes a distinct
methodology for addressing problems in its domain. ANN use the interactions
of biological neurons as a model for pattern recognition, decision, modeling,
and forecasting. Fuzzy logic uses approximate information in a manner similar
to the human decision process and is useful in control and decision making
applications. Evolutionary computation is modeled on the biological process of
natural selection and evolution and is useful in optimization [2][3]. Therefore, we developed a Hybrid Expert System (HES) with the help of these tools. This system consists of a Neural Network Based Expert System (NNES), a Rule Based Expert System (RBES), and an Explanatory Expert System (EES).
2 Artificial Neural Network, Fuzzy Logic, and Evolutionary Computation
2.1 Artificial Neural Network (ANN)
Also referred to as connectionist architectures, connectionist paradigms, parallel distributed processing, and neuromorphic systems, an ANN is an information-processing paradigm inspired by the way the densely interconnected, parallel structure of the mammalian brain processes information [4]. ANN are collections of mathematical models that emulate some of the observed properties of biological nervous systems and draw on the analogies of adaptive biological learning. The key element of the ANN paradigm is the novel structure of the information-processing system. It is composed of a large number of highly interconnected processing elements that are analogous to neurons and are tied together with weighted connections that are analogous to synapses [5].
Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons. This is true of ANN as well. Learning typically occurs by example through training, or exposure to a truthed set of input/output data, where the training algorithm iteratively adjusts the connection weights (synapses). These connection weights store the knowledge necessary to solve specific problems.
Although ANN have been around since the late 1950's, it was not until the mid-1980's that algorithms became sophisticated enough for general applications. Today, ANN are being applied to an increasing number of real-world problems of considerable complexity. They are good pattern recognition engines and robust classifiers, with the ability to generalize in making decisions about imprecise input data. They offer ideal solutions to a variety of classification problems such as speech, character, and signal recognition, as well as functional prediction and system modeling where the physical processes are not understood or are highly complex. ANN may also be applied to control problems, where the input variables are measurements used to drive an output actuator, and the network learns the control function.
The advantage of ANN lies in their resilience against distortions in the input
data and their capability of learning. They are often good at solving problems
that are too complex for conventional technologies (e.g., problems that do not
have an algorithmic solution or for which an algorithmic solution is too complex
to be found) and are often well suited to problems that people are good at
solving, but for which traditional methods are not [5].
There are multitudes of different types of ANN. One of the more popular is the multilayer perceptron, which is generally trained with the backpropagation of error algorithm [6]. In fact, this kind of ANN, with a training algorithm inspired by the classical backpropagation one, is what we used in the development of the HES. We will describe it in the next sections.
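To make the idea concrete, a minimal multilayer perceptron trained by classic backpropagation can be sketched as below. This is only an illustrative sketch, not the authors' actual system: the network sizes, learning rate, and XOR training task are all assumptions chosen for brevity.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class MLP:
    """Minimal perceptron: n_in inputs, one hidden layer, one output."""
    def __init__(self, n_in=2, n_hid=3):
        # each weight row carries a trailing bias term
        self.w_hid = [[random.uniform(-1, 1) for _ in range(n_in + 1)]
                      for _ in range(n_hid)]
        self.w_out = [random.uniform(-1, 1) for _ in range(n_hid + 1)]

    def forward(self, x):
        self.h = [sigmoid(sum(w[i] * xi for i, xi in enumerate(x)) + w[-1])
                  for w in self.w_hid]
        self.o = sigmoid(sum(w * h for w, h in zip(self.w_out, self.h))
                         + self.w_out[-1])
        return self.o

    def train_step(self, x, target, lr=0.5):
        o = self.forward(x)
        # output delta: squared-error derivative through the sigmoid
        d_out = (o - target) * o * (1 - o)
        # hidden deltas: propagate d_out back through the output weights
        d_hid = [d_out * self.w_out[j] * h * (1 - h)
                 for j, h in enumerate(self.h)]
        for j, h in enumerate(self.h):
            self.w_out[j] -= lr * d_out * h
        self.w_out[-1] -= lr * d_out
        for j, dh in enumerate(d_hid):
            for i, xi in enumerate(x):
                self.w_hid[j][i] -= lr * dh * xi
            self.w_hid[j][-1] -= lr * dh
        return (o - target) ** 2

xor = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
net = MLP()
before = sum(net.train_step(x, t, lr=0.0) for x, t in xor)  # lr=0: measure only
for _ in range(5000):
    for x, t in xor:
        net.train_step(x, t)
after = sum((net.forward(x) - t) ** 2 for x, t in xor)
```

After training, the total squared error on the four XOR patterns is lower than before training, which is the essential behaviour the backpropagation rule provides.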
2.2 Fuzzy Logic
Many decision-making and problem-solving tasks are too complex to be understood quantitatively; however, people succeed by using knowledge that is imprecise rather than precise. Fuzzy set theory, originally introduced by Lotfi Zadeh in the 1960's, resembles human reasoning in its use of approximate information and uncertainty to generate decisions. It was specifically designed to mathematically represent uncertainty and vagueness and to provide formalized tools for dealing with the imprecision intrinsic to many problems. By contrast, traditional computing demands precision down to each bit. Since knowledge can be expressed in a more natural way by using fuzzy sets, many engineering and decision problems can be greatly simplified.
Fuzzy set theory implements classes or groupings of data with boundaries that are not sharply defined (i.e., fuzzy) [7]. Any methodology or theory implementing crisp definitions, such as classical set theory, arithmetic, and programming, may be fuzzified by generalizing the concept of a crisp set to a fuzzy set with blurred boundaries. The benefit of extending crisp theory and analysis methods to fuzzy techniques is the strength in solving real-world problems, which inevitably entail some degree of imprecision and noise in the variables and parameters measured and processed for the application. Accordingly, linguistic variables are a critical aspect of some fuzzy logic applications, where general terms such as large, medium, and small are each used to capture a range of numerical values. While similar to conventional quantization, fuzzy logic allows these stratified sets to overlap (e.g., a 60 kilogram woman may be classified in both the large and medium categories, with varying degrees of belonging or membership to each group). Fuzzy set theory encompasses fuzzy logic, fuzzy arithmetic, fuzzy mathematical programming, fuzzy topology, fuzzy graph theory, and fuzzy data analysis, though the term fuzzy logic is often used to describe all of these [8][9].
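The overlapping-membership idea above can be sketched with triangular membership functions. The category boundaries below (in kilograms) are illustrative assumptions, not values from the paper; they are chosen so that a 60 kg weight belongs to both the medium and the large set with different degrees.

```python
def triangular(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# illustrative, deliberately overlapping weight categories (kg)
def medium(w):
    return triangular(w, 45, 60, 75)

def large(w):
    return triangular(w, 55, 80, 105)

m = medium(60)  # full membership in "medium"
l = large(60)   # partial membership in "large"
```

Unlike crisp quantization, both memberships are positive at 60 kg: `medium(60)` is 1.0 while `large(60)` is 0.2, so the same value belongs to two linguistic classes with different degrees.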
Fuzzy logic emerged into the mainstream of information technology in the late 1980's and early 1990's. Fuzzy logic is a departure from classical Boolean logic in that it implements soft linguistic variables on a continuous range of truth values, which allows intermediate values to be defined between the conventional binary values. It can often be considered a superset of Boolean or crisp logic, in the way fuzzy set theory is a superset of conventional set theory. Since fuzzy logic can handle approximate information in a systematic way, it is ideal for controlling nonlinear systems and for modeling complex systems where an inexact model exists or where ambiguity or vagueness is common. A typical fuzzy system consists of a rule base, membership functions, and an inference procedure. Fuzzy rules are of the form IF...THEN..., where both the IF and THEN terms are natural language expressions of some fuzzy classes or their combinations. Fuzzy logic provides powerful computational techniques for manipulating these classes aimed at specific problem-solving. The input/output data of the HES approached here also use this type of tool; we describe it in more detail in the next sections.
2.3 Evolutionary Computation
Evolutionary computation mimics the processes of biological evolution, with its ideas of natural selection and survival of the fittest, to provide effective solutions for optimization problems. The first approach to evolutionary computation was the Genetic Algorithm (GA), developed by John H. Holland in the 1960's [3]. The GA uses the concept of solution states encoded as binary-valued strings, where a bit is analogous to a gene and the string is analogous to a chromosome. A set (i.e., population) of candidate solution states (i.e., chromosomes) is generated and evaluated. A fitness function is used to evaluate each of the solutions in the population. The chromosomes encoding the better solutions are broken apart and recombined through the use of genetic operators such as crossover, mutation, and recombination to form new solutions (i.e., offspring), which are generally better or more fit than the previous iteration (i.e., generation). The process is repeated until an acceptable solution is found within specific time constraints [10]-[16]. Since this approach is significantly different from other optimization approaches, GA have been successfully applied to optimization problems for which other approaches have failed.
GA have proven to be well suited to the optimization of specific nonlinear multivariable systems and are being used in a variety of applications including scheduling, resource allocation, training ANN, and selecting rules for fuzzy systems. In our case, we use this tool to optimize the number of neurons in the hidden layer of the NNES [16][17]. The following sections present the development of this stage.
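The generate-evaluate-recombine loop described above can be sketched on a toy problem. This is a generic illustration, not the paper's GA: the OneMax fitness (count of 1-bits), tournament selection, and the elitism step are assumptions chosen to keep the sketch short and its behaviour predictable.

```python
import random

random.seed(1)

def fitness(chrom):
    """Toy fitness: number of 1-bits in the chromosome (OneMax)."""
    return sum(chrom)

def evolve(n_bits=20, pop_size=8, generations=10, p_cross=0.6, p_mut=0.1):
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    best = max(pop, key=fitness)
    first_fit = fitness(best)
    for _ in range(generations):
        def pick():  # binary tournament selection
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = [best[:]]  # elitism: best chromosome always survives
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            if random.random() < p_cross:
                cut = random.randrange(1, n_bits)  # one-point crossover
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            # bit-flip mutation
            child = [1 - g if random.random() < p_mut else g for g in child]
            nxt.append(child)
        pop = nxt
        best = max(pop + [best], key=fitness)
    return first_fit, fitness(best)

first_fit, final_fit = evolve()
```

Because of the elitism step, the best fitness found never decreases from one generation to the next, which mirrors the "generally better or more fit" behaviour described in the text.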
3 Proposed System
The proposed architecture for the ES includes a NNES, a RBES, and an EES. NNES have been used to implement ES as an alternative to RBES. ANN are made up of a large number of units. These units share some properties with natural neurons. Therefore, every unit has several inputs, some excitatory and others inhibitory. Moreover, these units take the values of each input and generate an output that is a function of the inputs. So, an ANN is characterized by its units (neurons), by the way the neurons are connected (topology), and by the algorithms used to change the weights of the connections (learning rules). Thus, these three aspects constitute the connectionist paradigm of AI [18].
An ES implemented this way is called a NNES. These systems are generally developed using a static network with feedforward topology, trained by a backpropagation-like learning algorithm [19]. So, while the basic network represents relations among concepts and connections as a way of inferring through it, the set of examples refines the NNES. In this last process, the algorithm provides modifications not only in the weights of the connections, but also in the network structure. It uses this topology and generates and/or eliminates connections that were not in the basic rules. Besides, it can also occasionally generate concepts that were not in the basic rules. Therefore, the system translates as rules the basic rules that the expert was not able to extract. After extraction, the basic rules undergo a treatment according to the kind of variables applied as input of the network, since they represent different types of concepts: quantitative, linguistic, boolean, or a combination of them [9]-[16][19]. Moreover, a structural modification of the ANN consists in determining the number of neurons of the hidden layer using a GA. The RBES, on the other hand, broadly depends on formal logic as a way of explicit knowledge representation. Our system has two kinds of data: basic rules and a set of examples. The basic rules are used to create the initial NNES, which is refined through the examples. The refined NNES is then translated into a RBES, from which explanations can be obtained. The model proposed uses fuzzy logic. The theory of fuzzy logic provides a good mathematical framework to represent imprecise knowledge, as in our case. Finally, the EES of the HES is derived from the RBES. It compares the answers given by the NNES and by the RBES. If the two answers are equal, the EES is triggered and gives an explanation. Otherwise, it states the impossibility of reaching the goal and tries to suggest how to obtain a suitable solution.
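The agreement check performed by the EES can be sketched as a small decision function. The function name, its arguments, and the message texts below are hypothetical; the paper specifies only the behaviour (explain on agreement, report impossibility and suggest a remedy otherwise).

```python
def ees_decide(nnes_answer, rbes_answer, rule_trace):
    """Sketch of the EES decision: explain only when both subsystems agree.

    nnes_answer / rbes_answer: conclusions from the two subsystems.
    rule_trace: the RBES rules that fired, used to build the explanation.
    """
    if nnes_answer == rbes_answer:
        return ("Conclusion '%s' explained by rules: %s"
                % (nnes_answer, ", ".join(rule_trace)))
    return ("Goal unreachable: NNES concluded '%s' but RBES concluded '%s'; "
            "consider revising the rule base or retraining the NNES."
            % (nnes_answer, rbes_answer))

agree = ees_decide("diagnosis A", "diagnosis A", ["R1", "R4"])
clash = ees_decide("diagnosis A", "diagnosis B", [])
```

The rule trace comes from the RBES side, which is what makes the explanation possible at all: the NNES alone cannot name the rules behind its answer.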
4 Methodology
The main idea is that, in general, the domain expert has difficulty in specifying all rules, mainly when imprecision is pervasive in the problem and fuzzy techniques are to be used. In this case, it is often difficult to choose the membership function. Nevertheless, the expert is able to supply examples of real cases. So, the knowledge engineer uses the rules supplied by the domain expert to implement a basic structure of a NNES. Afterwards, the NNES is refined through a training algorithm that uses the set of available examples.
This way, the knowledge acquisition (KA) task consists of extracting knowledge from the domain expert. In our case, the main goal is to minimize the intrinsic difficulties of the KA process. We try to obtain all possible rules from the domain expert in a short time, together with a set of examples. The aim of this stage is to extract the expert's knowledge as rules in a short time, such that the expert can also supply a series of examples of real cases. Moreover, the rules can be improved to add a way to capture the uncertainties associated with human cognitive processes. The model proposed uses fuzzy logic. The theory of fuzzy logic provides a great mathematical framework to represent this kind of knowledge [9]-[16][19].
The next stage is the implementation of the NNES, which uses neurons to represent concepts. The rules relating these concepts are used to establish the topology of the ANN, and a graphic tool known as AND/OR graphs helps in developing the basic structure of this system. In other words, AND/OR graphs, which represent concepts and connections, indicate the number of neurons in the input and output layers. They also show the existence of intermediate concepts and their connections, which are translated into the intermediate layer of the NNES [10]-[16]. Besides, the NNES also foresees the possibility of different kinds of variables in its input, representing different types of concepts: quantitative, linguistic, or boolean valued, or a combination of these [9][19]. That is the way the basic NNES is obtained. In the following, we show the mathematical model of the neuron, which is given by [9]:
X(t): the n-dimensional input vector of the ANN, or the outputs of the neurons exciting the neuron considered:

    X(t) = [x1(t), x2(t), ..., xi(t), ..., xn(t)]^T ∈ R^n            (1)

y(t) = o(t): the scalar output of each neuron, with o(t) ∈ R^1.

N: the nonlinear mapping function, X → O; X(t) ↦ o(t), where:

    X: Z+ → R^n,   O: Z+ → R^1                                       (2)

This mapping can be noted as N, and so:

    o(t) = N[X(t) ∈ R^n] ∈ R^1                                       (3)

Mathematically, the neural nonlinear mapping function N can be divided into two parts: a function called confluence and a nonlinear activation operation [9]. The confluence function ⊛ is the name given to a general function having as arguments the synaptic weights and inputs. A particular case widely used is the inner product. This mapping yields a scalar output u(t), which is a measure of the similarity between the neural input vector X(t) and the knowledge stored in the synaptic weight vector W(t). So, u(t) ∈ R^1 and W(t) is given by

    W(t) = [w0(t), ..., wi(t), ..., wn(t)]^T ∈ R^(n+1)               (4)

Redefining X(t) to include the bias x0(t), we have:

    u(t) = X(t) ⊛ W(t)                                               (5)

The mathematical development of this stage appears in [10]-[16]. In short, for OR neurons, in (5), by replacing the ⊛-operation by the T-operation and the Σ-operation by the S-operation, we get

    u(t) = S_{i=1..n} [wi(t) T xi(t)] ∈ [0, 1]                       (6)

Then, for AND neurons, in (5), by replacing the ⊛-operation by the algebraic product and the Σ-operation by the T-operation, we get

    u(t) = T_{i=1..n} [wi(t) · xi(t)] ∈ [0, 1]                       (7)

The neuron output is then obtained through the nonlinear activation operation Ψ:

    o(t) = Ψ[u(t)] ∈ [0, 1]                                          (8)
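Equations (6) and (7) can be made concrete by instantiating the abstract T- and S-operations. Choosing min as the T-norm and max as its dual S-norm is a common convention in the fuzzy neural network literature, but it is an assumption here; the paper leaves the operators generic.

```python
from functools import reduce

def t_norm(a, b):
    """min: a common choice of T-norm."""
    return min(a, b)

def s_norm(a, b):
    """max: the dual S-norm of min."""
    return max(a, b)

def or_neuron(w, x):
    """Eq. (6): u(t) = S_i [w_i T x_i], with min/max instances."""
    return reduce(s_norm, (t_norm(wi, xi) for wi, xi in zip(w, x)))

def and_neuron(w, x):
    """Eq. (7): u(t) = T_i [w_i * x_i], algebraic product inside min."""
    return reduce(t_norm, (wi * xi for wi, xi in zip(w, x)))

w, x = [0.8, 0.4], [0.5, 0.9]
u_or = or_neuron(w, x)    # max(min(0.8, 0.5), min(0.4, 0.9)) = 0.5
u_and = and_neuron(w, x)  # min(0.8*0.5, 0.4*0.9) = min(0.40, 0.36) = 0.36
```

Both outputs stay in [0, 1] whenever the weights and inputs do, matching the ranges stated in (6) and (7).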
Another stage of the HES is the NNES refinement. This is made through a learning algorithm using the examples of real cases as a training set. This learning algorithm allows structural changes of the network through inclusion and/or exclusion of neurons and/or connections. This approach leads to a localized knowledge representation, where neurons represent concepts and connections represent relations among concepts [10]-[16].
After the basic NNES is obtained, the set of examples serves to validate the NNES structure. In the worst case, the NNES does not represent the knowledge of the problem. It then becomes evident that the basic rules extracted from the expert are not sufficient, as expected. So, these same examples are used by the learning algorithm to refine the NNES. After the refinement of the NNES, a new discussion is held with the domain expert to validate the modifications in the basic structure of the NNES. Thus, a new set of examples is obtained to test the NNES again. If it performs well, the NNES is assumed to represent the proposed goal.
Finally, a reverse process is followed to infer the if-then rules together with their membership degrees, and a RBES is implemented. The RBES then serves as the basis for developing another system, the EES, which is supposed to be able to explain why the NNES reached a conclusion.
5 Learning Algorithm
The learning algorithm developed is inspired by the classical backpropagation algorithm [6]. Nevertheless, it presents some differences: optimization of the hidden layer is supported by a GA, the logic operators AND/OR are incorporated in place of the weighted sum, and the NNES is formed using fuzzy logic. This learning algorithm provides modifications not only in the connection weights, but also in the network structure. It generates and/or eliminates connections that were not in the fuzzy basic rules given by the expert. Moreover, it can also occasionally generate concepts that were not in the fuzzy basic rules. So, the system translates as rules the new fuzzy basic rules that the expert was not able to provide during the development of the basic NNES, in the shape of new concepts and/or connections. Eventually, the number of neurons in the hidden layer must be modified as a function of the generation and/or elimination of hidden intermediate concepts. In this case, it is suggested that the optimization of the hidden layer can be accomplished using a GA [10]-[16].
We chose a GA to optimize the size of the hidden layer and to determine which weights should be set to zero. This can be justified by the following main facts: GA avoid local optima, provide near-global optimization solutions, and are easy to implement. Nevertheless, when a GA is applied with this goal to the hidden layer of an ANN, we must take care to respect a maximum and a minimum number of neurons in this layer. In fact, too many neurons generally decrease the generalization capabilities of the network and imply a long learning phase. On the other hand, too few neurons may be unable to learn the task with the desired precision. So, there is an intermediate number of neurons that must be put in the hidden layer to avoid the problems mentioned above. The ANN must be sufficiently rich to solve the problem, but it must also be adequately simple, so that it solves the problem well without consuming long training [10]-[16].
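The bounded search over hidden-layer sizes can be sketched as a small GA over integers. Everything below is illustrative: the bounds, the truncation-selection scheme, and especially the toy fitness function, which merely stands in for a real validation measure that penalizes both too-small and too-large layers, as the text argues.

```python
import random

random.seed(2)

N_MIN, N_MAX = 2, 12  # assumed bounds on the hidden-layer size

def fitness(n):
    """Toy stand-in for validation quality: peaks at an intermediate size,
    penalizing both too-small (cannot learn) and too-large (poor
    generalization, slow training) hidden layers."""
    return -(n - 5) ** 2

def ga_size_search(pop_size=8, generations=15, p_mut=0.3):
    pop = [random.randint(N_MIN, N_MAX) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]  # truncation selection
        children = []
        for s in survivors:
            # +-1 mutation, clipped so every chromosome respects the bounds
            c = s + random.choice([-1, 1]) if random.random() < p_mut else s
            children.append(min(N_MAX, max(N_MIN, c)))
        pop = survivors + children
    return max(pop, key=fitness)

best_n = ga_size_search()
```

Clipping inside the mutation step is what enforces the maximum and minimum neuron counts the text insists on: no chromosome can ever encode a layer outside [N_MIN, N_MAX].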
However, because of the difficulty in analysing min and/or max operations, the training of fuzzy ANN, especially min-max ANN, appears not to be approachable rigorously and systematically. Therefore, in practice, one tends to choose bounded addition and multiplication to replace the min and max operations just to bypass the difficulty. Although the modified ANN is readily trainable thanks to its analytical nature, it is functionally very different from the original one. In a sense, the lack of an appropriate analytical tool for the min and max operations greatly limits their applicability.
In [16][20], the authors made another attempt at developing a rigorous theory for the differentiation of min-max functions by means of functional analysis, and derived the delta rule for training min-max ANN based on this differentiation theory. So, we applied the backpropagation algorithm modified by [16] to the training of the NNES.
6 Simulations
In this section, we provide an example to show that cognitive computing is effective in the development of the HES. The case study illustrates the application of the HES to the problem of epileptic crises classification. This data set was supplied by physician experts, mainly at the University Hospital of the Federal University of Santa Catarina. We obtained about 39 symptoms and 4 diagnostic classes.
Consider a min-max NNES with three layers, i.e., an input layer, an output layer, and a hidden layer. Since the range of values of the min-max NNES is often constrained to [-1, 1], the activation function for each neuron is chosen as the hyperbolic tangent. In the following, we describe some simulations for the example considered. In one of them we used: input layer = 6 neurons, output layer = 3 neurons, hidden layer = 6 neurons, Generation Number (G) = 10, Initial Population (P) = 8, Gaussian distribution, Crossover Rate (C) = 0.6, Mutation Rate (M) = 0.1, Learning Rate (η) = 0.1, Momentum (α) = 0.7, Tolerance (T) = 0.1, Maximum Epochs (ME) = 5, and Total Epochs (E) = 50. With this training data, 5 neurons were obtained in the hidden layer. As the number of more important data items in the set of patterns was 4, this simulation shows us that a basic NNES with 6 neurons in the hidden layer can be minimized to a number near the count of more important data obtained from the knowledge-extraction processing.
Using the same data given above, except ME = 50 and E = 500, the training also yielded 5 neurons in the hidden layer. Nevertheless, it is observed that the quality of the network improved. In this case, the value of the Relative Variance Coefficient (RVC), i.e., RVC = standard deviation / average value, decreased, while the fitness increased.
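The RVC defined above (standard deviation divided by the average) is straightforward to compute; a small sketch follows. The fitness values in the example are made up purely for illustration.

```python
import statistics

def rvc(samples):
    """Relative Variance Coefficient: standard deviation / average value,
    computed over a population of fitness values."""
    return statistics.pstdev(samples) / statistics.mean(samples)

fitnesses = [2, 4, 4, 4, 5, 5, 7, 9]  # illustrative fitness values
value = rvc(fitnesses)  # pstdev = 2.0, mean = 5.0, so RVC = 0.4
```

A decreasing RVC over generations means the population's fitness values are clustering around their mean, i.e., the dispersion relative to the average is shrinking, which is how the text interprets improved network quality.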
In another simulation, we kept almost all the variables used in the previous simulations, except that C = 0.8; the number of neurons in the hidden layer was optimized to 4. We observed that the standard deviation curve reached higher values with respect to the previous simulations, so the dispersion of the variable (fitness) in relation to its average (the average of the fitnesses) increased. A great occurrence of crossover and mutation happened in some generations.
We then made some more simulations in which we changed, for example, M = 0.2, keeping the same values of the other variables used in the first simulation. The final NNES presented 4 neurons in the hidden layer, and when we changed only P = 30 the NNES continued to have 4 neurons in the hidden layer. We observe that a bigger diversity in the chromosomes of the initial population was brought about both by the change of the P value and by the change of the M value. Other simulations and other aspects of the problem approached here can be seen in [15][16].
7 Conclusions
The complex systems associated with human activity are often poorly defined. Cognitive computing provides an effective and efficient way to carry out a systems analysis of processes in which technological processes and human activities are interdependent. In this paper, a HES, including a RBES and a NNES, is discussed under the aspects of KA, where the treatment of imprecision offers the possibility of explaining the reasoning used to reach a conclusion. Fuzzy sets and fuzzy logic can be used to express uncertain information in an exact form. With the help of this model, the methodology has proved, in the preliminary studies performed, very promising, leading to an easier KA phase than would be expected if the KA were performed using symbolic techniques alone. The hybridism, on the other hand, allows complementing the NNES with reasoning-explanation facilities, which in most cases are difficult to obtain with a NNES.
The training of the fuzzy NNES was inspired by the classic backpropagation algorithm, with some alterations, as already mentioned in the previous sections. Besides, it was observed that in the backward pass the error propagation among the layers reached the expected values. This work also faced one of the limitations of a feedforward network with multiple layers trained by a backpropagation-like algorithm: it requires an activation function that is nonlinear and differentiable. In this work, we used the hyperbolic tangent for this function. However, when we accomplished the necessary operations in the backward pass, there was a problem: since the NNES uses AND/OR neurons, these functions are not differentiable a priori. Nevertheless, as can be observed in [20], this difficulty was overcome through a function called the lor function.
Another goal reached concerns the optimization of the topology to be adopted for the fuzzy NNES. The optimization of the hidden layer was supported by a GA. In the maximization case of the hidden layer, the adopted solution considered the following points: the values for the crossover and mutation rates were chosen empirically, e.g., 0.6 for the crossover operator and 0.1 for the mutation operator. Using these values, the system reached a good performance. However, during the mutation process, a population of chromosomes was created codifying a hidden layer too big to be an acceptable solution. To eliminate this effect, the peculiarities of the example treated were considered. In the minimization case of the hidden layer, the following points were considered: given an example set, we can determine which values are more predominant with regard to the others. Then the minimum value of the chromosomes generated by the selection process and the genetic operators should be decreased to at most the number of important data items related to the set of examples.
Finally, after the network is optimized, its refinement is accomplished. When we obtain the winner network, we have both the number of neurons in the optimized hidden layer and the trained NNES. We then presented other sets of tests to the NNES, whose purpose is to analyse the NNES refinement. We observed that most of the test patterns presented to the NNES were recognized. So we expect the knowledge not to be altered with relation to the chosen domain. Besides, we also observed that almost all of the complexity problems mentioned in this work were overcome with the help of cognitive computing.
References
1. Johnson, R.C.: What is cognitive computing? Dr. Dobb's Journal (1993).
2. Hedberg, S.: Emerging genetic algorithms. AI Expert (1994) 25-29.
3. Holland, J.H.: Adaptation in Natural and Artificial Systems. MIT Press, Cambridge, MA (1975).
4. Smith, C.U.M.: The complexity of brain: a biologist's view. Complexity International, Vol. 2 (1995).
5. Hogan, J.M., Diederich, J.: Random neural networks of biologically plausible connectivity. Complexity International, Vol. 2 (1996).
6. Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning internal representations by error propagation. In: Rumelhart, D.E., McClelland, J.L., the PDP group (eds.): Parallel Distributed Processing. Vol. 1. MIT Press, Cambridge, Massachusetts (1987) 319-362.
7. Zadeh, L.A.: Fuzzy sets. Information and Control, Vol. 8 (1965) 338-354.
8. Dimitrov, V.: Use of fuzzy logic when dealing with social complexity. Complexity International, Vol. 4 (1997).
9. Gupta, M.M., Rao, D.H.: On the principles of fuzzy neural networks. Fuzzy Sets and Systems, Vol. 61, No. 1 (1994) 1-18.
10. Brasil, L.M., Azevedo, F.M., Garcia, R.O., Barreto, J.M.: Cooperation of symbolic and connectionist expert system techniques to overcome difficulties. Proc. II Neural Networks Brazilian Congress. Curitiba, Brazil (1995) 177-182.
11. Brasil, L.M., Azevedo, F.M., Garcia, R.O., Barreto, J.M.: A methodology for implementing hybrid expert systems. Proc. IEEE Mediterranean Electrotechnical Conference, MELECON'96. Bari, Italy (1996) 661-664.
12. Brasil, L.M., Azevedo, F.M., Barreto, J.M.: Uma arquitetura híbrida para sistemas especialistas. Proc. III Neural Networks Brazilian Congress. Florianopolis, Brazil (1997) 167-172.
13. Brasil, L.M., Azevedo, F.M., Barreto, J.M.: Uma arquitetura para sistema Neuro-Fuzzy-GA. Proc. III Congreso Chileno de Ingeniería Eléctrica, Universidad de La Frontera. Temuco, Chile (1997) 712-717.
14. Brasil, L.M., Azevedo, F.M., Barreto, J.M.: A hybrid expert architecture for medical diagnosis. Proc. 3rd International Conference on Artificial Neural Networks and Genetic Algorithms, ICANNGA'97. Norwich, England (1997) 176-180.
15. Brasil, L.M., Azevedo, F.M., Barreto, J.M.: Learning algorithm for connectionist systems. Proc. XII Congreso Chileno de Ingeniería Eléctrica, Universidad de La Frontera. Temuco, Chile (1997) 697-702.
16. Brasil, L.M., Azevedo, F.M., Barreto, J.M., Noirhomme-Fraiture, M.: Training algorithm for neuro-fuzzy-GA systems. Proc. 16th IASTED International Conference on Applied Informatics, AI'98. Garmisch-Partenkirchen, Germany, February (1998) 697-702 (in press).
17. Fogel, D.B.: Evolutionary Computation: Toward a New Philosophy of Machine Intelligence. IEEE Press, New York, USA (1995).
18. Azevedo, F.M. de: Contribution to the Study of Neural Networks in Dynamical Expert Systems. Ph.D. Thesis, Institut d'Informatique, FUNDP, Belgium (1993).
19. Mitra, S., Pal, S.K.: Logical operation based fuzzy MLP for classification and rule generation. Neural Networks, Vol. 7, No. 2 (1994) 25-29.
20. Zhang, X., Hang, C., Tan, S., Wang, P.Z.: The min-max function differentiation and training of fuzzy neural networks. IEEE Transactions on Neural Networks, No. 5 (1994) 1139-1150.