Mathematics may be defined as a building game producing a large set of self-coherent intellectual entities: they do not have any existence outside of our heads (no herds of "Twos" in the forests). This purely intellectual construction is mainly carried out by unusual humans (called mathematicians) with no care for applications (with some exceptions). From this intellectual construction, other people (incredible but true) pick some maths entities and a priori decide to match them with some real world observations. These unusual kinds of people are called physicists, chemists, … and applied maths engineers. We show in the following figure the conceptual links between several maths-based human activities that together lead to what is generally called a 'mathematical model':

NB: the human being is the central element of the main points in this scheme:
– Observing a part of the Real World through a finite number of sensors with finite resolution and range is a human activity: what to observe, using which sensors, why, … are questions that find answers in the a priori knowledge and beliefs of humans. For 'the same Real World', the choice of different experiments and sensors may lead to different observations (and thus to different mathematics/observations matchings).
– Building mathematics as a self-coherent set of entities (what we could call 'pure maths'), discussing what "self-coherent" means, what "demonstrated" or "exist" means, … is a human intellectual activity. For example, "is it possible to create ex nihilo an entirely coherent system without any a priori?" is a question that led to the definition of the "axiom" notion (cf. the axiom of choice), which is the mathematical word for a priori knowledge and belief.


– Choosing to fit observations into pure maths entities, and then using the inheritance of their properties and their ability to combine in order to build new entities, is a human activity using a priori knowledge and belief. For example, 'space' and 'time' are not fitted into the same mathematical entities in Newton's and in Einstein's physics… does that mean that the properties of 'space' and 'time' changed between 1665 and 1920? One can notice that experiment and observation techniques made big progress between those two dates! Getting new observations of variations gave new 'ideas' of matching… and led to new mathematical models.

Once every human activity has been done, we get MATHEMATICAL MODELS, which are usually described in universities as entirely self-coherent disciplines with no human intervention (and that is true: the human intervention was to create them. Once the model is created, inheritance allows us to talk about observed entities with a vocabulary derived from pure maths, using formal combination operations…). But it is important not to forget that:
– observations of variations ARE NOT the real world

– models ARE NOT the real world AT ALL
– models ARE NOT pure maths

Some facts that show the difference:

– observations give a representation of the real world, compatible with our senses (and mainly vision: we can handle a 1, 2, or 3-D representation of an observation, not more!) within a finite range of precision, bandwidth, …; outside of this range, no one "knows" what is going on.
– mathematical models INTRINSICALLY produce ERRORS (of prediction/estimation…): once the error is lower than a given value, one can say that the MODEL IS "TRUE" (a very different definition of truth than in the pure maths world!). NB: even if the error seems to be null… one should NEVER consider that the model is "perfect", because:

– measurements have a finite precision (so what does a "null error" mean?)
– in practice, there always remain "small" unexplained variations, called "noise"
– the comparison between the predictions of the model and the observations was made ONLY in a finite number of cases
– the experiment modifies the part of the real world that one tries to "observe"… Observed variations are images of interactions between human beings and the "part of the world"…

NB: the above diagram also shows that technology developments may lead to theoretical developments, although the opposite is what is always presented in universities as the feedforward cause-to-effect link! (theoretical work is supposed to bring technology development).

WHAT A MATHEMATICAL MODEL IS
One could say that mathematical models (also called "applied maths" models) are nothing more than an intellectual representation of a set of observations. This intellectual representation has:
– a finite range of application
– a finite precision (error IS a feature of the representation)
And the representation may change:
– from one range of application to another,
– in time (nothing lasts forever…).




WHAT THE HELL CAN I DO WITH SUCH A MATHEMATICAL MODEL?
There are two main ways of using mathematical models:

1 – assuming that the error is "small enough" to be considered as zero: in such a case, models are generally used for:

– prediction: this is the first purpose of a mathematical model: if I throw my arrow like this… will it hit the animal? If one builds this machine like that… will it allow doing this? …
– "understanding": if a model has "parameters" that may be tuned in order to adapt its output to observations (that is, in order to get a quasi-null error on a set of observations), then sometimes the parameter values are used as "descriptors" of the real world. For that, these parameters must have an intrinsic meaning for the human being who applies the model. NB: "understanding" the real world IS NOT possible through the use of applied maths, as shown in the above diagram… one can only understand the "state" of our intellectual representation of the real world!

2 – hypothesis testing: detection and classification

In such a case, the error is not supposed to be quasi-null: the error is an image of the "distance" between the real world and our intellectual representation of the real world. This approach is often used in "detection" applications (defect detection, rare event detection, …) or in classification: running several models at the same time gives several errors, each error corresponding to a given hypothesis; the smallest error corresponds to the most plausible known hypothesis.
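A minimal sketch of this classification-by-smallest-error idea (the observed signal and the three candidate models below are our own illustrative assumptions, not taken from the original text):

```python
import numpy as np

# Classify a noisy observation by comparing it against several candidate
# models and keeping the hypothesis whose model gives the smallest error.
t = np.linspace(0.0, 1.0, 100)
observation = np.sin(2 * np.pi * 3 * t) + 0.1 * np.random.randn(t.size)

models = {
    "3 Hz sine": np.sin(2 * np.pi * 3 * t),
    "5 Hz sine": np.sin(2 * np.pi * 5 * t),
    "constant":  np.zeros_like(t),
}

# One error per hypothesis: mean squared distance between model and observation.
errors = {name: np.mean((observation - pred) ** 2) for name, pred in models.items()}
best = min(errors, key=errors.get)
print(f"most plausible hypothesis: {best} (error = {errors[best]:.4f})")
```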

AND IF MY SYSTEM IS COMPOSED OF MANY SUB-SYSTEMS?

When the part of the real world (also called the "system") is "big" (example: a car), the temptation is to cut it into sub-systems (example: wheels, tyres, springs, suspension, …) and to build a model for each of them. This is the most common approach in industry. Unfortunately, this way of building applied maths models has two disadvantages:
– errors (of sub-models) may accumulate… (and believe us… they often do!), and in the end, the more detailed the model, the less useful!
– the cutting of a system into sub-systems is often technology-sensitive: example, for a car: the mechanical steering can be decomposed into a few mechanical bodies

… But in the case of steer-by-wire (electronic steering), it is nonsense to keep the same decomposition (electronic sub-systems don't mimic the functionalities of the mechanical bodies of the mechanical steering). It means that a FUNCTIONAL ANALYSIS must be done BEFORE building the model: sub-systems must not be visible bodies, they must be "sub-functions" (which are not supposed to be technology-sensitive).

And in practice… "global imprecise models" often lead to better results than detailed precise ones (because the more precise a sub-model is, the more sensitive it is to the error generated by the upstream sub-model…). In any case, one can see that the precision of upstream sub-models MUST be much better than the precision of downstream sub-models… in order to get a robust model. If this is not the case, it is not possible to plug models onto models… without building a model of the interface (dealing with precision matters): a model of the junction between two models… (stop!)
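A minimal sketch of error accumulation along a chain of sub-models (our own illustration; the identity stages and the noise level are assumptions made for the example):

```python
import numpy as np

# Each stage is a "perfect" identity sub-model plus a small, independent
# modelling error; errors accumulate along the chain.
rng = np.random.default_rng(0)

def chained_output(x, n_stages, stage_error_std):
    for _ in range(n_stages):
        x = x + rng.normal(0.0, stage_error_std)  # each sub-model adds its own error
    return x

true_value = 1.0
trials = np.array([chained_output(true_value, n_stages=10, stage_error_std=0.05)
                   for _ in range(10_000)])
# Variances of independent stage errors add: the std grows like sqrt(n_stages).
print(f"std after 10 stages: {trials.std():.3f}  (single stage: 0.05)")
```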

CAN I BUILD A MODEL FOR "ANYTHING"?
The reply is NO.
Let us consider the billiard game:

The black line is the desired trajectory. The red one is the actual trajectory: angle and speed cannot be initialized with infinite precision. This is called the error on initial conditions. One can see in the above diagram that this initial error grows in a regular way with time: at "every" time, it is possible to know in which range the error lies. One says that such a system can be modelled. Now, let us consider exactly the same game, but with obstacles:

One can see that even for a very small initial error, trajectories may be completely different after a certain time: at the beginning, the system is predictable… but suddenly a BIFURCATION occurs and the error goes out of bounds. This sensitivity to initial conditions leads to the definition of chaotic systems: when the sensitivity to initial conditions is bigger than the precision of actuators and measurements… the system is theoretically unpredictable in the long term (although it stays completely predictable in the short term!). Building a model of the billiard with obstacles wouldn't allow getting long-term predictions!
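A minimal sketch of this sensitivity to initial conditions, using the logistic map as a stand-in for the billiard with obstacles (our own illustrative choice; a billiard is harder to simulate in a few lines):

```python
# Two trajectories of the chaotic logistic map, starting a tiny distance apart.
def logistic_trajectory(x0, r=4.0, steps=40):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.300000)
b = logistic_trajectory(0.300001)  # initial error of 1e-6

for step in (0, 10, 20, 30, 40):
    print(f"step {step:2d}: |error| = {abs(a[step] - b[step]):.6f}")
# Short term: the two trajectories agree. Long term: the tiny initial error
# is amplified until the trajectories are completely different.
```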

Several kinds of models

The God Knowledge Model

The first interesting model to describe is also the simplest to explain (although it is impossible to apply in practice): this is the "God Knowledge Model" (GKM). This model is nothing else than a giant (infinite) data base with ALL the possible cases recorded. In order to get the output of the model, one needs the input vector: this input vector has to be found in the data base. Once found, one just has to READ the output.

There is no computing.

Of course, the number of cases is generally infinite (and even non-denumerable) and this model cannot be used… But let's keep it in mind! Features: infinite number of data points, no computing.
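A minimal sketch of the GKM idea (the entries are made up for illustration; the real GKM would contain ALL possible cases, which is why it cannot exist):

```python
# The model is a pure lookup table: there is no computing, only a read.
god_knowledge = {
    (0.0, 0.0): "miss",
    (0.5, 1.0): "hit",
    (1.0, 2.0): "miss",
    # ... every possible input vector would have to appear here.
}

input_vector = (0.5, 1.0)
output = god_knowledge[input_vector]  # no calculation, just retrieval
print(output)
```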

The Local Computing Memory

The idea that comes next after the GOD KNOWLEDGE model is the LOCAL COMPUTING MEMORY: this solution consists in recording "almost every possible observation" into a big data base. Then, when a partial observation occurs, it is possible to search for the closest record in this data base (let's notice that this notion of closeness between sets of measures requires that a topology, and thus a distance, be defined beforehand). When the 2 or 3 closest cases are found, it is possible to COMPUTE the output for the new entry (e.g. a voting procedure for a pattern recognition/classification system, an interpolation/extrapolation procedure for a quantitative modelling system). One can see that the God Knowledge Model is the limit of the Local Computing Memory when the number of recorded cases tends to ALL THE CASES. Computing is a local procedure (it applies only between the few elements selected because they are very "close" to the new entry). Features: low computing power, big memory.
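A minimal nearest-neighbours sketch of the Local Computing Memory, assuming a Euclidean distance as the required topology (the recorded observations are invented for illustration):

```python
import numpy as np

records_x = np.array([[0.0], [1.0], [2.0], [3.0], [4.0]])  # recorded inputs
records_y = np.array([0.0, 1.1, 3.9, 9.2, 15.8])           # recorded outputs

def local_memory_predict(x_new, k=2):
    # Computing is local: only the k closest records take part.
    distances = np.linalg.norm(records_x - x_new, axis=1)
    nearest = np.argsort(distances)[:k]
    return records_y[nearest].mean()   # simple interpolation between neighbours

print(local_memory_predict(np.array([2.4])))  # interpolates between close records
```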

The Equational Model

The equational model is a set of mathematical equations; example: y = ax² + bx + c. In this example, y can be considered as an output variable, x as an input variable, and a, b, c as parameters. The equations are usually given by a "theory" (a set of a priori matchings between observations and maths entities that were shown to be interesting and that is taught, for instance, in universities) or they may result from YOUR experiments. The parameters have to be tuned in order to make predictions fit measurements: one must find a "good set of parameters". In the general case, there is no UNIQUE set of parameters for a given result.

There are mainly two ways of finding such a parameter set:
– a priori: the parameters must then have an intrinsic meaning for an expert who uses a theory involving these parameters (physics, …);
– from data: the parameters are automatically tuned in order to maximize the fitness of the model (compared to real data): maximizing the fitness usually means minimizing the error of the model.

This second way of finding a good set of parameters doesn't require them to have an "intrinsic meaning". The search for a good set of parameters that will make the model fit the observations is often called "process identification".
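A minimal "process identification" sketch for the equational model y = ax² + bx + c (the true parameter values and the noise level are assumptions made for this example):

```python
import numpy as np

# Tune a, b, c so that the model's predictions fit noisy measurements.
rng = np.random.default_rng(1)
x = np.linspace(-2.0, 2.0, 50)
y_measured = 1.5 * x**2 - 0.8 * x + 0.3 + rng.normal(0.0, 0.1, x.size)

# Least squares: minimize the error of the model over the observations.
design = np.column_stack([x**2, x, np.ones_like(x)])
(a, b, c), *_ = np.linalg.lstsq(design, y_measured, rcond=None)
print(f"identified parameters: a={a:.2f}, b={b:.2f}, c={c:.2f}")
```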

NB: there may be several equational systems for several ranges of variation (plus a switch). If ever "every" case needs a new set of equations, it means that equations are not needed: one just needs to record the output for a given input, and the system becomes a God Knowledge System. Conversely, if ONE system of equations can be used whatever the input range, one calls it a General Equational Model. In the case of understandable parameters, the model is said to be a "knowledge-based general equational model". Features: high computing power, low memory.

NB: because the model and its parameters are chosen in order to make predictions fit observations on a FINITE set of examples, the General Equational Model doesn't exist in practice (it has LIMITS OF APPLICABILITY). It is very important that the user is aware of these limits…

Equational models that show a "meaning" through their parameters

Example: U = U0 · e^(−t/τ)
Parameters are U0 and τ.
Input is t.
Output is U.
Meaning of the parameters: U0 is the initial value, and τ is the inertia (the intersection between U = 0 and the tangent at t = 0):
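As a worked check of that graphical reading of τ (our own derivation from the formula above):

```latex
U'(t) = -\frac{U_0}{\tau}\, e^{-t/\tau}
\quad\Rightarrow\quad U'(0) = -\frac{U_0}{\tau},
\qquad
T(t) = U_0\left(1 - \frac{t}{\tau}\right),
\qquad
T(t) = 0 \;\Leftrightarrow\; t = \tau .
```

So the tangent drawn at t = 0 crosses U = 0 exactly at t = τ, which is why τ can be read directly on the graph.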



Equational models that show a meaning through their parameters are often called "white boxes".

Equational models that show "no meaning" through their parameters

Example 1: the so-called feed-forward Neural Networks (see NEURAL NETS)

Let us consider M1 = synaptic weights matrix n°1 and M2 = synaptic weights matrix n°2; then: Si = th( Σj M2ij · th( Σk M1jk · ek ) )

These kinds of equations are, under certain conditions, universal approximators (see HERE), and they are used for modelling systems from data. The parameters are the synaptic weights (the values of matrix 1 and matrix 2) and they generally do not have any intrinsic meaning for the user of such a model. That is why these models are often called "black boxes".
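A minimal sketch of the computation above (the layer sizes and random weights are assumptions made for illustration; such weights carry no intrinsic meaning, hence the "black box"):

```python
import numpy as np

# Si = th( Σj M2_ij · th( Σk M1_jk · e_k ) ), with th = tanh.
rng = np.random.default_rng(2)
M1 = rng.normal(size=(5, 3))   # synaptic weights matrix n°1 (hidden x input)
M2 = rng.normal(size=(2, 5))   # synaptic weights matrix n°2 (output x hidden)

def forward(e):
    hidden = np.tanh(M1 @ e)        # th( Σk M1_jk · e_k )
    return np.tanh(M2 @ hidden)     # th( Σj M2_ij · hidden_j )

print(forward(np.array([0.1, -0.4, 0.7])))   # S, the output vector
```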

Example 2: sometimes, even very simple equational models don't show any meaning through their parameters. Let us consider the linear regression model: Y = Σ ai·Xi + error. The parameters are the ai coefficients. They are computed by the linear regression algorithm in order to fit the model to the observations. Because the model is very simple, the coefficients are supposed to have a meaning for the user… (e.g. a kind of "weight" or "importance"…), but we show below an example (an Excel simulation: everyone can try it on his/her computer):
– We build a set of data:

V1 = ALEA(); V2 = 0.1·V1 + ALEA(); V3 = 0.5·V1 + 0.5·V2; V4 = 0.3·V3 + 0.3·V2 + 0.3·V1 − ALEA()/3; V5 = ALEA()/10 + 0.25·V4 + 0.25·V3 + 0.25·V2 + 0.25·V1; V6 = ALEA() − 0.1·V1 − 0.2·V2 + 0.6·V5; V7 = V6 + V5 + V2 − V1; and Y = 0.05·V1 + 0.05·V2 + 0.05·V3 + 0.05·V4 + 0.05·V5 + 0.7·V6 + 0.05·V7

– Every time we hit the F9 key, new random values ALEA() are drawn, and the linear regression algorithm of Excel is applied. This linear regression algorithm yields two interesting sets of results:
– the set of ai parameters of the model: this set can be compared to the parameters actually used to build Y;
– the correlation of reconstruction (expected: 100%) = fitness of the model.
We give the correlation matrix for every F9 click, in order to let statisticians think about it. Results:

– F9 click n°1: (screenshot of regression parameters and correlation matrix)

– F9 click n°2: (screenshot of regression parameters and correlation matrix)

– etc…

Conclusion: process identification with a linear regression always leads, in our case, to a "PERFECT" model (correlation = 1). However, even if the model is "perfect" in terms of correlation… and even if this model is very simple… its parameters have NO meaning! (Although the interpretation of regression coefficients as an "importance measurement" is a method that many universities still apply and even publish in "scientific" papers… Unfortunately, this is only possible under certain conditions of independence of the input variables… which are RARELY verified in practice!): the "perfect" theory applied to cases where it shouldn't apply… leads to the "perfect publishable nonsense"!
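A minimal numpy reproduction of this Excel experiment (ALEA() is Excel's RAND(); the sample size and solver are our own choices). Because V1…V7 are strongly collinear, the fit is exact on every draw, yet the returned coefficients are not unique and do not recover the weights used to build Y:

```python
import numpy as np

rng = np.random.default_rng()
n = 200
V1 = rng.random(n)
V2 = 0.1 * V1 + rng.random(n)
V3 = 0.5 * V1 + 0.5 * V2                       # exact linear combination
V4 = 0.3 * V3 + 0.3 * V2 + 0.3 * V1 - rng.random(n) / 3
V5 = rng.random(n) / 10 + 0.25 * (V4 + V3 + V2 + V1)
V6 = rng.random(n) - 0.1 * V1 - 0.2 * V2 + 0.6 * V5
V7 = V6 + V5 + V2 - V1                         # exact linear combination
Y = 0.05 * (V1 + V2 + V3 + V4 + V5 + V7) + 0.7 * V6

X = np.column_stack([V1, V2, V3, V4, V5, V6, V7])
coefs, *_ = np.linalg.lstsq(X, Y, rcond=None)  # minimum-norm solution
corr = np.corrcoef(Y, X @ coefs)[0, 1]
print("coefficients:", np.round(coefs, 3))     # not the construction weights
print("correlation of reconstruction:", round(corr, 6))  # ~1.0 every run
```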

The rules-based model

Sometimes, knowledge and beliefs about observed variations are "recorded" neither as data nor as equations: they can also be recorded as a set of "rules"; examples: "if this happens, the consequence will be xxxx", "the more the pressure grows, the more the temperature grows, the less the volume…". Rules are a set of logical entities that describe the variations in a QUALITATIVE way (some say in a symbolic world). In order to allow a quantitative evaluation of the "model's output", there are several approaches; the best known are:
– case-based reasoning, which proposes to apply the closest recorded case (one can note its closeness to local computing memories);

– Bayesian probability systems: knowing that A and B have probabilities P(A) and P(B), what is the probability P(C) of C?
– fuzzy logic, which describes rules with a numerical representation of quantitative variables (see FUZZY LOGIC); it makes it very easy to transform numerical data into concepts, apply logic to those concept entities, and convert the conclusion of the logical processing back into a numerical result.
The advantage of rules-based systems is that their behaviour is understandable in natural language. The disadvantage is that rules are a less compact way of recording knowledge and belief than equations. Features: average computing power, average memory. A small fuzzy-style sketch of the pressure/volume rule is given below.
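A minimal fuzzy-logic-style sketch of the rule "the more the pressure grows, the less the volume" (the membership function, ranges, and defuzzification are simplistic assumptions made for the example):

```python
def high_pressure(p_bar):
    # Fuzzify: degree (0..1) to which the pressure is "high", between 1 and 10 bar.
    return min(1.0, max(0.0, (p_bar - 1.0) / 9.0))

def volume_from_rule(p_bar, v_max=10.0, v_min=1.0):
    mu = high_pressure(p_bar)            # number -> concept degree
    # Apply the rule (the "higher" the pressure, the "smaller" the volume),
    # then defuzzify: concept degree -> numerical volume.
    return v_max - mu * (v_max - v_min)

for p in (1.0, 4.0, 10.0):
    print(f"pressure {p:4.1f} bar -> volume {volume_from_rule(p):4.1f} L")
```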

Conclusion

Building applied maths models is a work for experts…
The use of a software's user-friendly interface may produce many numbers and simulations that have no meaning at all… although they bring the "illusion" of truth!

The main points are:
– focus on the "right" level of detail (use a functional analysis…);
– take the error of models into account as an intrinsic property of applied maths models (if you wish to get robust models);
– choose an applicable kind of model (if you need data for process identification… make sure that data are available…);
– don't try to build a long-term prediction model for a chaotic system;
– verify the applicability hypotheses in "reality";

– give the limits of the model's validity domain (ranges…): it will avoid bizarre extrapolations! …
– beware if you try to give a meaning to parameters, it's not that simple: if the equations come from knowledge, then it might be possible, but even in such a case, one must verify a few conditions.
