The paradigm of complex probability and Claude Shannon’s information theory

ABSTRACT Andrey Kolmogorov put forward in 1933 the five fundamental axioms of classical probability theory. The original idea in my complex probability paradigm is to add new imaginary dimensions to the real dimensions of the experiment, which will make the work in the complex probability set C totally predictable and with a probability permanently equal to one. Therefore, adding to the real set of probabilities R the contributions of the imaginary set of probabilities M will make the event in C = R + M absolutely deterministic. It is of great importance that stochastic systems become totally predictable since we will be perfectly able to foretell the outcome of all random events that occur in nature. Hence, my purpose here is to link my complex probability paradigm to Claude Shannon's information theory that was originally proposed in 1948. Consequently, by calculating the parameters of the new prognostic model, we will be able to determine the magnitude of the chaotic factor, the degree of our knowledge, the complex probability, the self-information functions, the message entropies, and the channel capacities in the probability sets R, M, and C, which are all functions of the message real probability subject to chaos and random effects.

KEYWORDS Complex set; complex probability; probability norm; degree of our knowledge; chaotic factor; self-information; message entropy; channel capacity
Nomenclature
R Real probability set of events
M Imaginary probability set of events
C Complex probability set of events
i The imaginary number where i = √−1
EKA Extended Kolmogorov's Axioms
CPP Complex Probability Paradigm
P rob Probability of any event
P r Probability in the real set R = message real probability
P m Probability in the imaginary set M corresponding to the real probability in R = message complementary probability in M
P m /i Message complementary probability in R
Pc Probability of an event in R with its associated event in M; it is the message probability in the complex set C
Z Complex probability number and vector; it is the sum of P r and P m
DOK = |Z| 2 Degree of Our Knowledge of the random message; it is the square of the norm of Z
Chf Chaotic Factor of the random message
MChf Magnitude of the Chaotic Factor of the random message
I 2 Surprisal self-information in base 2
RI 2 Rescaled surprisal in base 2
Ī 2 Expectancy self-information in base 2
CONTACT Abdo Abou Jaoude abdoaj@idm.net.lb

Introduction
Firstly, information theory studies the quantification, storage, and communication of information. It was originally proposed by Claude Elwood Shannon in 1948 to find fundamental limits on signal processing and communication operations such as data compression, in a landmark paper entitled 'A Mathematical Theory of Communication'. Now this theory has found applications in many other areas, including statistical inference, natural language processing, cryptography, neurobiology (Rieke, Warland, van Steveninck, & Bialek, 1997), the evolution (Huelsenbeck, Ronquist, Nielsen, & Bollback, 2001) and function of molecular codes (Allikmets et al., 1998), model selection in ecology (Burnham & Anderson, 2002), thermal physics (Jaynes, 1957), quantum computing, linguistics, plagiarism detection (Bennett, Li, & Ma, 2003), pattern recognition, and anomaly detection (David & Anderson, 2003).
A key measure in information theory is 'entropy'. Entropy quantifies the amount of uncertainty involved in the value of a random variable or the outcome of a random process. For example, identifying the outcome of a fair coin flip (with two equally likely outcomes) provides less information (lower entropy) than specifying the outcome from a roll of a die (with six equally likely outcomes). Some other important measures in information theory are mutual information, channel capacity, error exponents, and relative entropy (Fazlollah, 1994 [1961]; Ash, 1990 [1965]; Gibson, 1998; Shannon, 1948; Hartley, 1928).
Moreover, the field is at the intersection of mathematics, statistics, computer science, physics, neurobiology, and electrical engineering. Its impact has been crucial to the success of the Voyager missions to deep space, the invention of the compact disc, the feasibility of mobile phones, the development of the Internet, the study of linguistics and of human perception, the understanding of black holes, and numerous other fields. Important subfields of information theory include source coding, channel coding, algorithmic complexity theory, algorithmic information theory, information-theoretic security, and measures of information. (Arndt, 2004;Ash, 1990;Gallager, 1968;Landauer, 1961;Timme, Alford, Flecker, & Beggs, 2012).
Furthermore, information theory studies the transmission, processing, utilisation, and extraction of information. Abstractly, information can be thought of as the resolution of uncertainty. In the case of communication of information over a noisy channel, this abstract concept was made concrete in 1948 by Claude Shannon in his paper 'A Mathematical Theory of Communication', in which 'information' is thought of as a set of possible messages, where the goal is to send these messages over a noisy channel, and then to have the receiver reconstruct the message with low probability of error, in spite of the channel noise. Shannon's main result, the noisy-channel coding theorem showed that, in the limit of many channel uses, the rate of information that is asymptotically achievable is equal to the channel capacity, a quantity dependent merely on the statistics of the channel over which the messages are sent (Rieke et al., 1997;Cover & Thomas, 2006;Csiszar & Korner, 1997;Goldman, 1968;MacKay, 2003;Mansuripur, 1987).
Information theory is closely associated with a collection of pure and applied disciplines that have been investigated and reduced to engineering practice under a variety of rubrics throughout the world over the past half century or more: adaptive systems, anticipatory systems, artificial intelligence, complex systems, complexity science, cybernetics, informatics, machine learning, along with systems sciences of many descriptions. Information theory is a broad and deep mathematical theory, with equally broad and deep applications, amongst which is the vital field of coding theory (McEliece, 2002;Pierce, 1961;Reza, 1961).
Additionally, coding theory is concerned with finding explicit methods, called codes, for increasing the efficiency and reducing the error rate of data communication over noisy channels to near the channel capacity. These codes can be roughly subdivided into data compression (source coding) and error-correction (channel coding) techniques. In the latter case, it took many years to find the methods Shannon's work proved were possible. A third class of information theory codes are cryptographic algorithms (both codes and ciphers). Concepts, methods and results from coding theory and information theory are widely used in cryptography and cryptanalysis. Information theory is also used in information retrieval, intelligence gathering, gambling, statistics, and even in musical composition (Shannon & Weaver, 1949;Stone, 2014;Yeung, 2002).
The landmark event that established the discipline of information theory and brought it to immediate worldwide attention was the publication of Claude E. Shannon's classic paper 'A Mathematical Theory of Communication' in the Bell System Technical Journal in July and October 1948. Prior to this paper, limited information-theoretic ideas had been developed at Bell Labs, all implicitly assuming events of equal probability. Harry Nyquist's 1924 paper, Certain Factors Affecting Telegraph Speed, contains a theoretical section quantifying 'intelligence' and the 'line speed' at which it can be transmitted by a communication system, giving the relation W = K log m (recalling Ludwig Boltzmann's constant), where W is the speed of transmission of intelligence, m is the number of different voltage levels to choose from at each time step, and K is a constant. Ralph Hartley's 1928 paper, Transmission of Information, uses the word information as a measurable quantity, reflecting the receiver's ability to distinguish one sequence of symbols from any other, thus quantifying information as H = log S^n = n log S, where S was the number of possible symbols, and n the number of symbols in a transmission. The unit of information was therefore the decimal digit, much later renamed the hartley in his honour as a unit or scale or measure of information. Alan Turing in 1940 used similar ideas as part of the statistical analysis of the breaking of the German Second World War Enigma ciphers (Brillouin, 1962 [2004]; Gleick, 2011; Yeung, 2008).
Much of the mathematics behind information theory with events of different probabilities was developed for the field of thermodynamics by Ludwig Boltzmann and Josiah Willard Gibbs. Connections between information-theoretic entropy and thermodynamic entropy, including the important contributions by Rolf Landauer in the 1960s, are explored in Entropy in thermodynamics and information theory (Khinchin, 1957; Leff & Rex, 1990; Logan, 2014).
In addition, in Shannon's revolutionary and groundbreaking paper, the work for which had been substantially completed at Bell Labs by the end of 1944, Shannon for the first time introduced the qualitative and quantitative model of communication as a statistical process underlying information theory, opening with the assertion that: "The fundamental problem of communication is that of reproducing at one point, either exactly or approximately, a message selected at another point." With it came the ideas of:
• the information entropy and redundancy of a source, and its relevance through the source coding theorem;
• the mutual information, and the channel capacity of a noisy channel, including the promise of perfect loss-free communication given by the noisy-channel coding theorem;
• the practical result of the Shannon-Hartley law for the channel capacity of a Gaussian channel; as well as
• the bit, a new way of seeing the most fundamental unit of information.
Also, some applications of information theory to other fields are:

Intelligence uses and secrecy applications
Information theoretic concepts apply to cryptography and cryptanalysis. Turing's information unit, the ban, was used in the Ultra project, breaking the German Enigma machine code and hastening the end of World War II in Europe. Shannon himself defined an important concept now called the unicity distance. Based on the redundancy of the plaintext, it attempts to give a minimum amount of ciphertext necessary to ensure unique decipherability. Information theory leads us to believe it is much more difficult to keep secrets than it might first appear. A brute force attack can break systems based on asymmetric key algorithms or on most commonly used methods of symmetric key algorithms (sometimes called secret key algorithms), such as block ciphers. The security of all such methods currently comes from the assumption that no known attack can break them in a practical amount of time. Information theoretic security refers to methods such as the one-time pad that are not vulnerable to such brute force attacks. In such cases, the positive conditional mutual information between the plaintext and ciphertext (conditioned on the key) can ensure proper transmission, while the unconditional mutual information between the plaintext and ciphertext remains zero, resulting in absolutely secure communications. In other words, an eavesdropper would not be able to improve his or her guess of the plaintext by gaining knowledge of the ciphertext but not of the key. However, as in any other cryptographic system, care must be used to correctly apply even information-theoretically secure methods; the Venona project was able to crack the one-time pads of the Soviet Union due to their improper reuse of key material (Campbell, 1982;Seife, 2006;Siegfried, 2000).

Pseudorandom number generation
Pseudorandom number generators are widely available in computer language libraries and application programmes. They are, almost universally, unsuited to cryptographic use as they do not evade the deterministic nature of modern computer equipment and software. A class of improved random number generators is termed cryptographically secure pseudorandom number generators, but even they require random seeds external to the software to work as intended. These can be obtained via extractors, if done carefully. The measure of sufficient randomness in extractors is min-entropy, a value related to Shannon entropy through Rényi entropy; Rényi entropy is also used in evaluating randomness in cryptographic systems. Although related, the distinctions among these measures mean that a random variable with high Shannon entropy is not necessarily satisfactory for use in an extractor and so for cryptography uses (Escolano, Francisco, & Pablo, 2009;Theil, 1967).

Seismic exploration
One early commercial application of information theory was in the field of seismic oil exploration. Work in this field made it possible to strip off and separate the unwanted noise from the desired seismic signal. Information theory and digital signal processing offer a major improvement of resolution and image clarity over previous analog methods (Haggerty, 1981).

Semiotics
Concepts from information theory such as redundancy and code control have been used by semioticians such as Umberto Eco and Ferruccio Rossi-Landi to explain ideology as a form of message transmission whereby a dominant social class emits its message by using signs that exhibit a high degree of redundancy such that only one message is decoded among a selection of competing ones (Noth, 1981).

Miscellaneous applications
Information theory also has applications in gambling and investing, black holes, and bioinformatics. (Wikipedia, the free encyclopedia, Information theory).
Finally, and to conclude, this research paper is organised as follows: After the introduction in section I, the purpose and the advantages of the present work are presented in section II. Afterward, in section III, the extended Kolmogorov's axioms and hence the complex probability paradigm with their original parameters and interpretation will be explained and illustrated. In section IV, Shannon's information theory is summarised and reviewed. Moreover, in section V, the surprisal and expectancy self-information functions are defined. In section VI, the complex probability paradigm axioms are applied to the concept of binary entropy, which will be extended to the imaginary and complex sets. Additionally, in section VII, the BSC capacity is also extended to the sets M and C. Also, in section VIII, all the parameters of the new CPP model are presented. In section IX, the flowchart of this current study is shown. Furthermore, the simulations of the novel model for various discrete and continuous probability distributions are illustrated in section X. In section XI, a final analysis will be done. Finally, we conclude the work by doing a comprehensive summary in section XII, and then present the list of references cited in the current research work.

The purpose and the advantages of the present work
All our work in classical probability theory is to compute probabilities. The original idea in this paper is to add new dimensions to our random experiment, which will make the work totally deterministic. In fact, probability theory is a nondeterministic theory by nature, which means that the outcome of stochastic events is due to chance and luck. By adding new dimensions to the event occurring in the 'real' laboratory, which is R, we make the work deterministic and hence a random experiment will have a certain outcome in the complex set of probabilities C. It is of great importance that stochastic systems become totally predictable since we will be perfectly able to foretell the outcome of all chaotic and random events that occur in nature, like for example in statistical mechanics, in all stochastic processes, or in the well-established field of information theory. Therefore, the work that should be done is to add to the real set of probabilities R the contributions of M, which is the imaginary set of probabilities that will make the event in C = R + M absolutely deterministic. If this is found to be fruitful, then a new theory in stochastic sciences would be elaborated, and this to understand deterministically those phenomena that used to be random phenomena in R. This is what I called 'The Complex Probability Paradigm', which was initiated and elaborated in my nine previous papers (Abou Jaoude, 2013a, 2013b; Abou Jaoude, 2014; Abou Jaoude, 2015a, 2015b; Abou Jaoude, El-Tawil, & Kadry, 2010; Abou Jaoude, 2016a, 2016b; Abou Jaoude, 2017).
Moreover, the information theory laws first introduced by Claude Shannon are very well known and established. An updated follow-up of the message behaviour with time, which is subject to chaotic and non-chaotic effects, is done by the message probability due to its definition that evaluates the chances of flips in the transmitted and received message.
Furthermore, my purpose in this current work is to link the complex probability paradigm to Claude Shannon's information theory. In fact, the system message probability derived from information theory will be included in and applied to the complex probability paradigm. This will lead to the novel and original prognostic model illustrated in this paper. Hence, by calculating the parameters of the new prognostic model, we will be able to determine the magnitude of the chaotic factor, the degree of our knowledge, the complex probability, the message surprisal and expectancy self-information functions, the message entropies, and the channel capacities in the probability sets R, M, and C, which are all functions of the message real probability subject to chaos and random effects. Consequently, to summarise, the objectives and the advantages of the present work are to: (1) Extend classical probability theory to the set of complex numbers, hence to relate probability theory to the field of complex analysis in mathematics. This task was initiated and elaborated in my nine previous papers.
(2) Do an updated follow-up of the system behaviour with time, which is subject to chaos. This follow-up is accomplished by the message real, imaginary, and complex probabilities due to their definitions that evaluate the chances of flips in the transmitted and received message in R, M, and C, and hence to relate probability theory to information theory in an original and new way.
(3) Apply the new probability axioms and paradigm to information theory; thus, I will extend the concepts of information theory to the complex probability set C. (4) Prove that any random and stochastic phenomenon can be expressed deterministically in the complex set C. (5) Quantify both the degree of our knowledge and the chaos magnitude of the random message and channel. (6) Draw and represent graphically the functions and parameters of the novel paradigm associated to a random message and channel. (7) Show that the classical concept of the message entropy is always equal to 0 in the complex set; hence, no chaos, no disorder, no unpredictability, and no ignorance exist in C (complex set) = R (real set) + M (imaginary set). (8) Prove that by adding supplementary and new dimensions to any random experiment, whether it is a random channel or message or any other stochastic system, we will be able to do prognostic in a deterministic way in the complex set C. (9) Pave the way to apply the original paradigm to other topics in statistical mechanics, in stochastic processes, and to the theory of information. These will be the subjects of my subsequent research papers.
To conclude and to summarise, compared with existing literature, the main contribution of this research paper is to apply my original complex probability paradigm to the concepts of self-information, of random entropy, and of channel capacity, thus to Claude Shannon's information theory. We emphasise that it is the first time that we link the complex probability paradigm to Claude Shannon's information theory; hence, the motivation of the current work will be to extend the classical information model to the complex set of numbers by adding supplementary imaginary dimensions to the information system. Consequently, all of Shannon's random quantities will be expressed deterministically in the novel paradigm. The following figure summarises the objectives of the current research paper (Figure 1).

The extended set of probability axioms
In this section, the extended set of probability axioms of the complex probability paradigm will be presented.

The original Andrey Nikolaevich Kolmogorov set of axioms
The simplicity of Kolmogorov's system of axioms may be surprising. Let E be a collection of elements {E 1 , E 2 , . . .} called elementary events and let F be a set of subsets of E called random events. The five axioms for a finite set E are (Benton, 1966a, 1966b; Feller, 1968; Montgomery & Runger, 2003; Walpole, Myers, Myers, & Ye, 2002):
Axiom 1: F is a field of sets.
Axiom 2: F contains the set E.
Axiom 3: A non-negative real number P rob (A), called the probability of A, is assigned to each set A in F. We have always 0 ≤ P rob (A) ≤ 1.
Axiom 4: P rob (E) equals 1.
Axiom 5: If A and B have no elements in common, the number assigned to their union is: P rob (A ∪ B) = P rob (A) + P rob (B); hence, we say that A and B are disjoint; otherwise, we have: P rob (A ∪ B) = P rob (A) + P rob (B) − P rob (A ∩ B). We say also that: P rob (A/B) = P rob (A ∩ B)/P rob (B), which is the conditional probability. If both A and B are independent then: P rob (A ∩ B) = P rob (A) × P rob (B). Moreover, we can generalise and say that for N disjoint (mutually exclusive) events A 1 , A 2 , . . . , A j , . . . , A N (for 1 ≤ j ≤ N), we have the following additivity rule: P rob (A 1 ∪ A 2 ∪ . . . ∪ A N ) = Σ j=1 to N P rob (A j ). And we say also that for N independent events A 1 , A 2 , . . . , A j , . . . , A N (for 1 ≤ j ≤ N), we have the following product rule: P rob (A 1 ∩ A 2 ∩ . . . ∩ A N ) = Π j=1 to N P rob (A j ).

Adding the imaginary part
Now, we can add to this system of axioms an imaginary part such that:
Axiom 6: Let P m = i × (1 − P r ) be the probability of an associated event in M (the imaginary part) to the event A in R (the real part). It follows that P r + P m /i = 1, where i is the imaginary number with i = √−1.
Axiom 7: We construct the complex number or vector Z = P r + P m = P r + i(1 − P r ) having a norm |Z| such that: |Z| 2 = P r 2 + (P m /i) 2 = 1 − 2P r (1 − P r ).
Axiom 8: Let Pc denote the probability of an event in the complex probability universe C where C = R + M. We say that Pc is the probability of an event A in R with its associated event in M such that: Pc 2 = (P r + P m /i) 2 = |Z| 2 − 2iP r P m = 1.
We can see that the system of axioms defined by Kolmogorov could hence be expanded to take into consideration the set of imaginary probabilities by adding three new axioms (Abou Jaoude, 2013a, 2013b; Abou Jaoude, 2014; Abou Jaoude, 2015a, 2015b; Abou Jaoude et al., 2010; Abou Jaoude, 2016a, 2016b; Abou Jaoude, 2017).

The purpose of extending the axioms
It is apparent from the set of axioms that the addition of an imaginary part to the real event makes the probability of the event in C always equal to 1. In fact, if we begin to see the set of probabilities as divided into two parts, one real and the other imaginary, then understanding will follow directly. The random event that occurs in the real probability set R (like tossing a coin and getting a head) has a corresponding probability P r . Now, let M be the set of imaginary probabilities and let |Z| 2 be the Degree of Our Knowledge (DOK for short) of this phenomenon. P r is always, and according to Kolmogorov's axioms, the probability of an event.
A total ignorance of the set R makes: P rob (event) = P r = 0.5 and P m = P rob (imaginary part) = 0.5i, and |Z| 2 in this case is equal to: |Z| 2 = 1 − (2 × 0.5) × (1 − 0.5) = 0.5. Conversely, a total knowledge of the set of events in R makes: P rob (event) = P r = 1 and P m = P rob (imaginary part) = 0.
Here we have |Z| 2 = 1 − (2 × 1) × (1 − 1) = 1 because the phenomenon is totally known, that is, its laws and variables are completely determined; hence, our degree of knowledge of the system is 1 = 100%. Now, if we can tell for sure that an event will never occur, i.e. like 'getting nothing' (the empty set), P r is accordingly equal to 0, that is, the event will never occur in R. P m will be equal to: P m = i(1 − P r ) = i, and |Z| 2 = 1 − (2 × 0) × (1 − 0) = 1, because we can tell that the event of getting nothing surely will never occur; thus, the Degree of Our Knowledge (DOK) of the system is 1 = 100% (Abou Jaoude et al., 2010).
We can infer that we have always: 0.5 ≤ |Z| 2 ≤ 1, ∀P r : 0 ≤ P r ≤ 1 and ∀P m /i : 0 ≤ P m /i ≤ 1. And what is important is that in all cases we have: Pc 2 = (P r + P m /i) 2 = 1. In fact, according to an experimenter in R, the game is a game of chance: the experimenter doesn't know the output of the event. He will assign to each outcome a probability P r and he will say that the output is nondeterministic. But in the universe C = R + M, an observer will be able to predict the outcome of the game of chance since he takes into consideration the contribution of M, so we write: Pc 2 = DOK − Chf = 1. Hence Pc is always equal to 1. In fact, the addition of the imaginary set M to our random experiment resulted in the abolition of ignorance and indeterminism. Consequently, the study of this class of phenomena in C is of great usefulness since we will be able to predict with certainty the outcome of experiments conducted. In fact, the study in R leads to unpredictability and uncertainty. So instead of placing ourselves in R, we place ourselves in C then study the phenomena, because in C the contributions of M are taken into consideration and therefore a deterministic study of the phenomena becomes possible. Conversely, by taking into consideration only the contribution of the set R we place ourselves in R, and by ignoring M we restrict our study to nondeterministic phenomena in R (Bell, 1992; Boursin, 1986; Dacunha-Castelle, 1996; Dalmedico-Dahan & Peiffer, 1986; Dalmedico-Dahan, Chabert, & Chemla, 1992; Ekeland, 1991; Franklin, 2001; Freund, 1973; Gleick, 1997; Gullberg, 1997; Science Et Vie, 1999; Srinivasan & Mehata, 1988; Stewart, 2002; Van Kampen, 2006; Wikipedia, the free encyclopedia, Probability; Kuhn, 1970; Warusfel & Ducrocq, 2004; Wikipedia, the free encyclopedia, Probability theory; Wikipedia, the free encyclopedia, Probability distribution; Abrams, 2008; Barrow, 1992; Daston, 1988; David, 1962; Gorrochum, 2012; Greene, 2003; Hacking, 2006; Jeffrey, 1992; Poincaré, 1968; Stewart, 1996; Stewart, 2012; Von Plato, 1994).
Moreover, it follows from the above definitions and axioms that (Abou Jaoude et al., 2010): Pc 2 = (P r + P m /i) 2 = |Z| 2 − 2iP r P m = DOK − 2iP r P m . The term 2iP r P m will be called the Chaotic factor in our experiment and will be denoted accordingly by 'Chf '. We will see why we have called this term the chaotic factor; in fact: Chf = 2iP r P m = 2iP r × i(1 − P r ) = −2P r (1 − P r ). In case P r = 1, that is the case of a certain event, then the chaotic factor of the event is equal to 0.
In case P r = 0, that is the case of an impossible event, then Chf = 0. Hence, in both of these last cases, there is no chaos since the outcome is certain and is known in advance.
What is interesting here is that we have thus quantified both the degree of our knowledge and the chaotic factor of any random event, and hence we write now: DOK = |Z| 2 = 1 − 2P r (1 − P r ) and Chf = 2iP r P m = −2P r (1 − P r ). Then we can conclude that: Pc 2 = Degree of our knowledge of the system − Chaotic factor = DOK − Chf = 1, therefore Pc = 1 permanently.
The graph below shows the linear relation between both DOK and Chf (Figure 5).
Furthermore, we need in our current study the absolute value of the chaotic factor, which gives us the magnitude of the chaotic and random effects on the studied message, materialised by the real flips chances P r and a probability density function, and which leads to an increasing system chaos in R. This new term will be denoted accordingly MChf or Magnitude of the Chaotic factor (Abou Jaoude, 2015a, 2015b; Abou Jaoude, 2016a, 2016b; Abou Jaoude, 2017). Hence, we can deduce the following: MChf = |Chf | = |2iP r P m | = 2P r (1 − P r ) and Pc 2 = DOK + MChf = 1. The graph below (Figure 6) shows the linear relation between both DOK and MChf. Moreover, Figures 7-13 show the graphs of Chf, MChf, DOK, and Pc as functions of the real probability P r for any probability distribution and for a gamma probability distribution. To summarise and to conclude, as the degree of our certain knowledge in the real universe R is unfortunately incomplete, the extension to the complex set C includes the contributions of both the real set of probabilities R and the imaginary set of probabilities M. Consequently, this will result in a complete and perfect degree of knowledge in C = R + M (since Pc = 1). In fact, in order to have a certain prediction of any random event, it is necessary to work in the complex set C in which the chaotic factor is quantified and subtracted from the computed degree of knowledge to lead to a probability in C equal to one. The Extended Kolmogorov Axioms (EKA for short) or the Complex Probability Paradigm (CPP for short) can be illustrated by the following figure (Figure 14).
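To make these relations concrete, the short numerical sketch below evaluates DOK, Chf, MChf, and Pc 2 for a few values of the real probability P r. It is only an illustrative check under the CPP definitions stated above; the function name and the sample values are mine, not from the paper.

```python
# Minimal numerical check of the CPP relations described above (illustrative only).

def cpp_parameters(pr: float):
    """Return DOK, Chf, MChf and Pc**2 for a real probability pr in [0, 1]."""
    pm_over_i = 1.0 - pr                # P_m / i, the complementary real probability
    dok = 1.0 - 2.0 * pr * pm_over_i    # DOK = |Z|^2 = 1 - 2*Pr*(1 - Pr)
    chf = -2.0 * pr * pm_over_i         # Chf = 2i*Pr*Pm = -2*Pr*(1 - Pr)
    mchf = abs(chf)                     # MChf = |Chf|
    pc_squared = dok - chf              # always equal to 1
    return dok, chf, mchf, pc_squared

for pr in (0.0, 0.25, 0.5, 0.75, 1.0):
    dok, chf, mchf, pc2 = cpp_parameters(pr)
    print(f"Pr={pr:4.2f}  DOK={dok:5.3f}  Chf={chf:6.3f}  MChf={mchf:5.3f}  Pc^2={pc2:4.2f}")
```

For every P r the printout gives Pc 2 = 1, while DOK stays between 0.5 and 1 and Chf between −0.5 and 0, as stated above.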

Quantities of information
Firstly, information theory is based on probability theory and statistics. Information theory often concerns itself with measures of information of the distributions associated with random variables. Important quantities of information are entropy, a measure of information in a single random variable, and mutual information, a measure of information in common between two random variables. The former quantity is a property of the probability distribution of a random variable and gives a limit on the rate at which data generated by independent samples with the given distribution can be reliably compressed. The latter is a property of the joint distribution of two random variables, and is the maximum rate of reliable communication across a noisy channel in the limit of long block lengths, when the channel statistics are determined by the joint distribution (Rieke et al., 1997;Huelsenbeck et al., 2001;Allikmets et al., 1998;Burnham & Anderson, 2002;Jaynes, 1957;Bennett et al., 2003;David & Anderson, 2003).
The choice of logarithmic base in the following formulae determines the unit of information entropy that is used. A common unit of information is the 'bit', based on the binary logarithm. Other units include the 'nat', which is based on the natural logarithm, and the 'hartley', which is based on the common logarithm (Fazlollah, 1994 [1961]; Ash, 1990 [1965]; Gibson, 1998; Shannon, 1948; Hartley, 1928).
In what follows, an expression of the form p log p is considered by convention to be equal to zero whenever p = 0. This is justified because lim p→0+ p log p = 0 − for any logarithmic base by L'Hôpital's rule.

Self-Information
Shannon derived a measure of information content called the self-information or 'surprisal' of a message x: I(x) = −log p(x), where p(x) = P rob (X = x) is the probability that message x is chosen from all possible choices in the message space X. The base of the logarithm only affects a scaling factor and, consequently, the units in which the measured information content is expressed. If the logarithm is base 2, the measure of information is expressed in units of bits (Kelly Jr, 1956; Kolmogorov, 1968; Landauer, 1961; Landauer, 1993; Timme et al., 2012). Information is transferred from a source to a recipient only if the recipient of the information did not already have the information to begin with. Messages that convey information that is certain to happen and already known by the recipient contain no real information. Infrequently occurring messages contain more information than more frequently occurring messages. This fact is reflected in the above equation: a certain message, i.e. of probability 1, has an information measure of zero. In addition, a compound message of two (or more) unrelated (or mutually independent) messages would have a quantity of information that is the sum of the measures of information of each message individually. That fact is also reflected in the above equation, supporting the validity of its derivation (Arndt, 2004; Ash, 1990; Cover & Thomas, 2006; Gallager, 1968; Goldman, 1968).
An example: The weather forecast broadcast is: 'Tonight's forecast: Dark. Continued darkness until widely scattered light in the morning.' This message contains almost no information. However, a forecast of a snowstorm would certainly contain information since such does not happen every evening. There would be an even greater amount of information in an accurate forecast of snow for a warm location, such as Miami. The amount of information in a forecast of snow for a location where it never snows (impossible event) is the highest (infinity) (Csiszar & Korner, 1997;MacKay, 2003;Mansuripur, 1987). This measure has also been called surprisal, as it represents the 'surprise' of seeing the outcome (a highly improbable outcome is very surprising). This term was coined by Myron Tribus in his 1961 book Thermostatics and Thermodynamics (McEliece, 2002;Pierce, 1961;Reza, 1961).
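A small sketch of the surprisal measure described above (illustrative only; the function name is mine):

```python
import math

def self_information(p: float, base: float = 2.0) -> float:
    """Surprisal I(x) = -log_base p(x); infinite for an impossible message."""
    if p == 0.0:
        return math.inf
    return -math.log(p, base)

print(self_information(1.0))     # certain message  -> 0 bits
print(self_information(0.5))     # fair coin flip   -> 1 bit
print(self_information(1 / 64))  # rare message     -> 6 bits
```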
The information entropy of a random event is the expected value of its self-information.

Entropy of an information source
The entropy of a discrete information source is defined as H = −Σ i p i log 2 p i , where p i is the probability of occurrence of the i-th possible value of the source symbol. This equation gives the entropy in the units of 'bits' (per symbol) because it uses a logarithm of base 2, and this base-2 measure of entropy has sometimes been called the 'shannon' in his honour. Entropy is also commonly computed using the natural logarithm (base e, where e is Leonhard Euler's number), which produces a measurement of entropy in 'nats' per symbol and sometimes simplifies the analysis by avoiding the need to include extra constants in the formulas. Other bases are also possible, but less commonly used. For example, a logarithm of base 2 8 = 256 will produce a measurement in bytes per symbol, and a logarithm of base 10 will produce a measurement in decimal digits (or hartleys) per symbol. Intuitively, the entropy H X of a discrete random variable X is a measure of the amount of uncertainty associated with the value of X when only its distribution is known. The entropy of a source that emits a sequence of N symbols that are independent and identically distributed (iid) is N × H bits (per message of N symbols). If the source data symbols are identically distributed but not independent, the entropy of a message of length N will be less than N × H.
The entropy of a Bernoulli trial as a function of success probability, often called the binary entropy function H b (p). The entropy is maximised at 1 bit per trial when the two possible outcomes are equally probable, as in an unbiased coin toss ( Figure 15).
Suppose one transmits 1000 bits (0s and 1s). If the value of each of these bits is known to the receiver (has a specific value with certainty) ahead of transmission, it is clear that no information is transmitted. If, however, each bit is independently equally likely to be 0 or 1, 1000 shannons of information (more often called bits) have been transmitted. Between these two extremes, information can be quantified as follows. If the message space is the set of all messages {x 1 , x 2 , . . . , x n } that X could be, and p(x) is the probability of some x in this space, then the entropy, H, of X is defined: H(X) = E X [I(x)] = −Σ x p(x) log p(x). (Here, I(x) is the self-information, which is the entropy contribution of an individual message, and E X is the expected value.) A property of entropy is that it is maximised when all the messages in the message space are equiprobable p(x) = 1/n; i.e. most unpredictable, in which case H(X) = log n.
The special case of information entropy for a random variable with two outcomes is the binary entropy function, usually taken to the logarithmic base 2, thus having the shannon (Sh) as unit: H b (p) = −p log 2 p − (1 − p) log 2 (1 − p).
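The two quantities can be computed directly; the snippet below is an illustrative sketch (the function names are mine) showing a fair die, a fair coin, and a biased coin.

```python
import math

def entropy(probs, base: float = 2.0) -> float:
    """Shannon entropy H(X) = -sum p*log(p), with the 0*log(0) = 0 convention."""
    return -sum(p * math.log(p, base) for p in probs if p > 0.0)

def binary_entropy(p: float) -> float:
    """H_b(p) = -p*log2(p) - (1-p)*log2(1-p)."""
    return entropy([p, 1.0 - p])

print(entropy([1 / 6] * 6))   # fair die    -> log2(6) ≈ 2.585 bits
print(binary_entropy(0.5))    # fair coin   -> 1 bit (the maximum)
print(binary_entropy(0.11))   # biased coin -> ≈ 0.5 bit
```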

Joint entropy
The joint entropy of two discrete random variables X and Y is merely the entropy of their pairing: (X, Y). This implies that if X and Y are independent, then their joint entropy is the sum of their individual entropies. For example, if (X, Y) represents the position of a chess piece -X the row and Y the column, then the joint entropy of the row of the piece and the column of the piece will be the entropy of the position of the piece.
It is defined as H(X, Y) = −Σ x,y p(x, y) log 2 p(x, y). Despite similar notation, joint entropy should not be confused with cross entropy.

Conditional entropy (equivocation)
The conditional entropy or conditional uncertainty of X given random variable Y (also called the equivocation of X about Y) is the average conditional entropy over Y: H(X|Y) = E Y [H(X|y)] = −Σ x,y p(x, y) log 2 p(x|y). Because entropy can be conditioned on a random variable or on that random variable being a certain value, care should be taken not to confuse these two definitions of conditional entropy, the former of which is in more common use. A basic property of this form of conditional entropy is that: H(X|Y) = H(X, Y) − H(Y).

Mutual information (transinformation)
Mutual information measures the amount of information that can be obtained about one random variable by observing another. It is important in communication where it can be used to maximise the amount of information shared between sent and received signals. The mutual information of X relative to Y is given by: I(X; Y) = E X,Y [SI(x, y)] = Σ x,y p(x, y) log 2 [p(x, y)/(p(x) p(y))], where SI (Specific mutual Information) is the pointwise mutual information.
A basic property of the mutual information is that I(X; Y) = H(X) − H(X|Y). That is, knowing Y, we can save an average of I(X; Y) bits in encoding X compared to not knowing Y. Mutual information is symmetric: I(X; Y) = I(Y; X) = H(X) + H(Y) − H(X, Y).
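An illustrative sketch of this definition for a finite joint distribution (the function name and the example tables are mine):

```python
import math

def mutual_information(joint, base: float = 2.0) -> float:
    """I(X;Y) = sum_{x,y} p(x,y) * log[ p(x,y) / (p(x) p(y)) ] for a joint table."""
    px = [sum(row) for row in joint]
    py = [sum(col) for col in zip(*joint)]
    mi = 0.0
    for i, row in enumerate(joint):
        for j, pxy in enumerate(row):
            if pxy > 0.0:
                mi += pxy * math.log(pxy / (px[i] * py[j]), base)
    return mi

# Perfectly correlated bits carry 1 bit of mutual information; independent bits carry 0.
print(mutual_information([[0.5, 0.0], [0.0, 0.5]]))      # -> 1.0
print(mutual_information([[0.25, 0.25], [0.25, 0.25]]))  # -> 0.0
```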

Kullback-Leibler divergence (information gain)
The Kullback-Leibler divergence (or information divergence, information gain, or relative entropy) is a way of comparing two distributions: a 'true' probability distribution p(X), and an arbitrary probability distribution q(X).
If we compress data in a manner that assumes q(X) is the distribution underlying some data, when, in reality, p(X) is the correct distribution, the Kullback-Leibler divergence is the number of average additional bits per datum necessary for compression. It is thus defined D KL (p(X)‖q(X)) = Σ x p(x) log 2 [p(x)/q(x)]. Although it is sometimes used as a 'distance metric', KL divergence is not a true metric since it is not symmetric and does not satisfy the triangle inequality (making it a semi-quasimetric).
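A minimal sketch of this definition (the function name and the example distributions are mine):

```python
import math

def kl_divergence(p, q, base: float = 2.0) -> float:
    """D_KL(p || q) = sum p(x) * log[ p(x) / q(x) ]: extra bits paid for assuming q."""
    return sum(pi * math.log(pi / qi, base) for pi, qi in zip(p, q) if pi > 0.0)

p = [0.5, 0.25, 0.25]     # true distribution
q = [1 / 3, 1 / 3, 1 / 3] # assumed (uniform) distribution
print(kl_divergence(p, q))  # ≈ 0.085 extra bits per symbol
print(kl_divergence(p, p))  # 0.0: no penalty when the assumed model is exact
```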

Differential entropy
Differential entropy (also referred to as continuous entropy) is a concept in information theory that began as an attempt by Shannon to extend the idea of (Shannon) entropy, a measure of average surprisal of a random variable, to continuous probability distributions. Unfortunately, Shannon did not derive this formula, and rather just assumed it was the correct continuous analogue of discrete entropy, but it is not. The actual continuous version of discrete entropy is the limiting density of discrete points (LDDP). Let X be a random variable with a probability density function f. The differential entropy h(X) or h(f ) is defined as h(X) = −∫ f (x) log f (x) dx, where the integral is taken over the support of f. As with its discrete analog, the units of differential entropy depend on the base of the logarithm, which is usually 2 (i.e. the units are bits). Related concepts such as joint, conditional differential entropy, and relative entropy are defined in a similar fashion.

Coding theory
Coding theory is one of the most important and direct applications of information theory. It can be subdivided into source coding theory and channel coding theory. Using a statistical description for data, information theory quantifies the number of bits needed to describe the data, which is the information entropy of the source.
• Data compression (source coding): There are two formulations for the compression problem: lossless data compression, where the data must be reconstructed exactly; and lossy data compression, which allocates the bits needed to reconstruct the data within a specified fidelity level measured by a distortion function. This subset of information theory is called rate-distortion theory.
• Error-correcting codes (channel coding): While data compression removes as much redundancy as possible, an error-correcting code adds just the right kind of redundancy (i.e. error correction) needed to transmit the data efficiently and faithfully across a noisy channel.
This division of coding theory into compression and transmission is justified by the information transmission theorems, or source-channel separation theorems that justify the use of bits as the universal currency for information in many contexts. However, these theorems only hold in the situation where one transmitting user wishes to communicate to one receiving user. In scenarios with more than one transmitter (the multiple-access channel), more than one receiver (the broadcast channel) or intermediary 'helpers' (the relay channel), or more general networks, compression followed by transmission may no longer be optimal. Network information theory refers to these multi-agent communication models.

Source theory
Any process that generates successive messages can be considered a source of information. A memoryless source is one in which each message is an independent identically distributed random variable, whereas the properties of ergodicity and stationarity impose less restrictive constraints. All such sources are stochastic. These terms are well studied in their own right outside information theory.

Rate.
Information rate is the average entropy per symbol. For memoryless sources, this is merely the entropy of each symbol, while, in the case of a stationary stochastic process, it is r = lim n→∞ H(X n | X n−1 , X n−2 , . . . , X 1 ); that is, the conditional entropy of a symbol given all the previous symbols generated. For the more general case of a process that is not necessarily stationary, the average rate is r = lim n→∞ (1/n) H(X 1 , X 2 , . . . , X n ); that is, the limit of the joint entropy per symbol. For stationary sources, these two expressions give the same result.
It is common in information theory to speak of the 'rate' or 'entropy' of a language. This is appropriate, for example, when the source of information is English prose. The rate of a source of information is related to its redundancy and how well it can be compressed, the subject of source coding.

Channel capacity
Communication over a channel, such as an Ethernet cable, is the primary motivation of information theory. As anyone who has ever used a telephone (mobile or landline) knows, however, such channels often fail to produce an exact reconstruction of a signal; noise, periods of silence, and other forms of signal corruption often degrade quality. How much information can one hope to communicate over a noisy (or otherwise imperfect) channel?
Consider the communications process over a discrete channel. A simple model of the process is shown below (Figure 16): Here X represents the space of messages transmitted, and Y the space of messages received during a unit time over our channel. Let p(y|x) be the conditional probability distribution function of Y given X. We will consider p(y|x) to be an inherent fixed property of our communications channel (representing the nature of the noise of our channel). Then the joint distribution of X and Y is completely determined by our channel and by our choice of f (x), the marginal distribution of messages we choose to send over the channel. Under these constraints, we would like to maximise the rate of information, or the signal, we can communicate over the channel. The appropriate measure for this is the mutual information, and this maximum mutual information is called the channel capacity and is given by: C = max f (x) I(X; Y). This capacity has the following property related to communicating at information rate R (where R is usually bits per symbol). For any information rate R < C and coding error ε > 0, for large enough N, there exists a code of length N and rate ≥ R and a decoding algorithm, such that the maximal probability of block error is ≤ ε; that is, it is always possible to transmit with arbitrarily small block error. In addition, for any rate R > C, it is impossible to transmit with arbitrarily small block error. Channel coding is concerned with finding such nearly optimal codes that can be used to transmit data over a noisy channel with a small coding error at a rate near the channel capacity.

Continuous-time analog communications channel subject to Gaussian noise.
The Shannon-Hartley theorem states the channel capacity C, meaning the theoretical tightest upper bound on the information rate of data that can be communicated at an arbitrarily low error rate using an average received signal power S through an analog communication channel subject to additive white Gaussian noise of power N: C = B log 2 (1 + S/N), where C is the channel capacity in bits per second, a theoretical upper bound on the net bit rate (information rate, sometimes denoted I) excluding error-correction codes; B is the bandwidth of the channel in hertz (passband bandwidth in case of a bandpass signal); S is the average received signal power over the bandwidth (in case of a carrier-modulated passband transmission, often denoted C), measured in watts (or volts squared); N is the average power of the noise and interference over the bandwidth, measured in watts (or volts squared); and S/N is the signal-to-noise ratio (SNR) or the carrier-to-noise ratio (CNR) of the communication signal to the noise and interference at the receiver (expressed as a linear power ratio, not as logarithmic decibels).
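As an illustration of the Shannon-Hartley formula, the sketch below evaluates C = B log 2 (1 + S/N) for a hypothetical 3 kHz channel at a 30 dB signal-to-noise ratio; the numbers are illustrative choices, not values from the paper.

```python
import math

def shannon_hartley_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """C = B * log2(1 + S/N), in bits per second (SNR given as a linear power ratio)."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)

snr_db = 30.0
snr_linear = 10.0 ** (snr_db / 10.0)                  # 30 dB -> a power ratio of 1000
print(shannon_hartley_capacity(3000.0, snr_linear))   # ≈ 29,900 bits per second
```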

A Binary Symmetric Channel.
A binary symmetric channel (BSC) with crossover probability p is a binary input, binary output channel that flips the input bit with probability p. The BSC has a capacity of 1 − H b (p) bits per channel use, where H b is the binary entropy function to the base-2 logarithm (Figure 17).
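A short sketch of the BSC capacity 1 − H b (p) (the function names are mine):

```python
import math

def binary_entropy(p: float) -> float:
    """H_b(p) = -p*log2(p) - (1-p)*log2(1-p), with 0*log2(0) taken as 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def bsc_capacity(p: float) -> float:
    """Capacity of a binary symmetric channel with crossover probability p."""
    return 1.0 - binary_entropy(p)

print(bsc_capacity(0.0))    # noiseless channel        -> 1 bit per use
print(bsc_capacity(0.11))   # noisy channel            -> ≈ 0.5 bit per use
print(bsc_capacity(0.5))    # completely random output -> 0 bits per use
```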

Definitions
Shannon derived a measure of information content called the self-information or 'surprisal' of a message x: I(x) = −log p(x), where p(x) = P rob (X = x) is the probability that message x is chosen from all possible choices in the message space X. The base of the logarithm only affects a scaling factor and, consequently, the units in which the measured information content is expressed. If the logarithm is base 2, the measure of information is expressed in units of bits. Therefore, in base 2 the self-information or 'surprisal' of a message x is: I 2 (x) = −log 2 p(x). We define now a new function that we can call the 'expectancy' of the same message x, which is, in base 2: Ī 2 (x) = −log 2 [1 − p(x)]. In fact, if the probability of a message x is p(x), then the probability of the complement event is 1 − p(x). Hence I 2 (x) corresponds to p(x) and Ī 2 (x) corresponds to 1 − p(x). I 2 (x) measures the surprisal of a message x, that means the self-information acquired from the message rarity or unlikeliness to occur. Consequently, Ī 2 (x) is the opposite of I 2 (x) and will measure the self-information acquired from the message expectancy or likeliness to occur. Additionally, we have I 2 (x) + Ī 2 (x) = −log 2 {p(x)[1 − p(x)]}. And the binary entropy is H b [p(x)] = p(x) I 2 (x) + [1 − p(x)] Ī 2 (x) = −p(x) log 2 p(x) − [1 − p(x)] log 2 [1 − p(x)].
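An illustrative numerical sketch of the surprisal I 2, the expectancy Ī 2, and their probability-weighted sum, which equals the binary entropy as stated above (the function names are mine):

```python
import math

def surprisal(p: float) -> float:
    """I_2(x) = -log2 p(x): the self-information of message x, in bits."""
    return math.inf if p == 0.0 else -math.log2(p)

def expectancy(p: float) -> float:
    """Ibar_2(x) = -log2(1 - p(x)): the self-information of the complementary event."""
    return math.inf if p == 1.0 else -math.log2(1.0 - p)

p = 0.3
i2, ibar2 = surprisal(p), expectancy(p)
print(i2, ibar2)                    # ≈ 1.737 and ≈ 0.515 bits
print(p * i2 + (1.0 - p) * ibar2)   # ≈ 0.881 bits = H_b(0.3), the binary entropy
```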

Discrete and continuous message domains
In the discrete case, let the message set be {x 1 , x 2 , . . . , x n }. It can be ordered by a one-to-one correspondence with the discrete countable ordered interval [L b , U b ], having L b as the lower bound and U b as the upper bound, which is a subset of Z (the set of integers). Knowing that the message set and this interval have the same cardinal or number of elements, we can write without any confusion X ≤ x, which means all the messages less than or equal to x. Hence X ≤ x 3 means X = {x 1 , x 2 , x 3 }. We can also assign to each X a probability measure p 1 = P rob (X = x 1 ), p 2 = P rob (X = x 2 ), . . . , p n = P rob (X = x n ) such that Σ k=1 to n p k = 1. Moreover, we can write p(x j ) = Σ k=1 to j p k , which denotes the sum of all the messages probabilities less than or equal to the message x j . Examples of a discrete random variable X are the outcome of tossing a coin, the outcome of throwing a die, or the sum of two thrown dice.
If F(x) is the discrete probability cumulative distribution function (CDF) of the random variable of messages X, then let p(x) = P rob (X ≤ x) = F(x), where p(x) denotes the sum of all the messages probabilities less than or equal to the message x.
The complement probability is 1 − p(x) = 1 − F(x) = P rob (X > x) and denotes the sum of all the messages probabilities greater than the message x. The continuous case is an extension of the discrete case where the discrete interval is replaced by the continuous uncountable dense ordered message interval [L b , U b ], which is a subset of R (the set of real numbers). Therefore, p(x) = P rob (X ≤ x) = F(x), where p(x) denotes the sum of all the messages probabilities less than or equal to the message x, and 1 − p(x) = 1 − F(x) denotes the sum of all the messages probabilities greater than the message x.
Hence F(x) is the continuous CDF and f (x) is the probability density function (PDF) of the random variable of messages X that can follow any possible continuous random distribution. Examples of a continuous random variable X are the lifetime of a light bulb, the height of a building, the length of a rod, or the body weight.

Rescaled self-information functions
Moreover, if p(x) ∈ [0, 1] then I 2 (x) and Ī 2 (x) belong to the interval [0, ∞). We can rescale both of them to a new simulation domain related to the complement of the probability p(L b ). Let RI 2 (x) be the rescaled I 2 (x), RĪ 2 (x) be the rescaled Ī 2 (x), both obtained from I 2 (x) and Ī 2 (x) through the simulation rescaling factor (Figure 20).
Additionally, at the point x = Md = the Median of the message distribution, and for any probability distribution, we have p(Md) = 1 − p(Md) = 0.5; therefore the binary entropy is maximum since it is equal to: H b [p(Md)] = −0.5 log 2 0.5 − 0.5 log 2 0.5 = 0.5 + 0.5 = 1.

The real binary entropy H R b = H b
6.1.1. The real binary entropy H R b as a function of all the CPP parameters
In the real probability set R we have: H R b (p) = H b (p) = −p log 2 p − (1 − p) log 2 (1 − p). And from CPP we have Chf = 2iP r P m = −2p(1 − p). Then Chf = −2p + 2p 2 ⇒ 2p 2 − 2p − Chf = 0, which is a second-degree equation in p. So the discriminant is: Δ = 4 + 8Chf. Since −0.5 ≤ Chf ≤ 0 then 0 ≤ Δ ≤ 4, therefore the two real roots are: p 1 = (2 − √Δ)/4 = [1 − √(1 + 2Chf )]/2 and p 2 = (2 + √Δ)/4 = [1 + √(1 + 2Chf )]/2. Knowing that p 1 + p 2 = 1, hence 1 − p 1 = p 2 and 1 − p 2 = p 1 . Therefore, since log 2 (x/y) = log 2 x − log 2 y, log 2 (xy) = log 2 x + log 2 y, and log 2 2 = 1, and since Pc 2 = 1 ⇒ Pc = 1 = p 1 + p 2 , we get the final formula of H R b as a function of all the CPP parameters: H R b = −p 1 log 2 p 1 − p 2 log 2 p 2 , with p 1 = [Pc − √(DOK + Chf )]/2 and p 2 = [Pc + √(DOK + Chf )]/2, since 1 + 2Chf = DOK + Chf = DOK − MChf. In fact, and to check: if p = 0 or p = 1 then from the CPP equations above we have: Chf = 0 = MChf and DOK = 1 with Pc = 1 always. Therefore, by L'Hôpital's rule, lim p→0+ p log 2 p = 0 and log 2 1 = 0, hence H R b = 0. Moreover, if p = 0.5 then from the CPP equations above we have: Chf = −0.5, MChf = 0.5 and DOK = 0.5 with Pc = 1 as always. Therefore, since log 2 (1/2) = −log 2 2 = −1, we get H R b = −0.5 × (−1) − 0.5 × (−1) = 1.
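As a numerical cross-check of this derivation, the hedged sketch below recovers H R b from the chaotic factor by solving 2p 2 − 2p − Chf = 0 and evaluating the binary entropy at either root; the function names are illustrative only.

```python
import math

def binary_entropy(p: float) -> float:
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def entropy_from_chf(chf: float) -> float:
    """Recover H_b from the chaotic factor via the roots of 2p^2 - 2p - Chf = 0."""
    discriminant = 4.0 + 8.0 * chf              # Delta = 4 + 8*Chf, with 0 <= Delta <= 4
    p1 = (2.0 - math.sqrt(discriminant)) / 4.0
    p2 = (2.0 + math.sqrt(discriminant)) / 4.0  # note p1 + p2 = 1 = Pc
    return binary_entropy(p1)                   # equals binary_entropy(p2) by symmetry

p = 0.3
chf = -2.0 * p * (1.0 - p)                        # Chf = -2p(1 - p)
print(binary_entropy(p), entropy_from_chf(chf))   # both ≈ 0.8813 bits
```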

The real binary entropy H R b = H b as a function of Chf alone:
As Pc 2 = DOK − Chf = 1 ⇒ DOK = 1 + Chf and MChf = −Chf and Pc = 1 always, then (Figure 21): H R b = −{[1 − √(1 + 2Chf )]/2} log 2 {[1 − √(1 + 2Chf )]/2} − {[1 + √(1 + 2Chf )]/2} log 2 {[1 + √(1 + 2Chf )]/2}.

The real binary entropy H R b = H b as a function of MChf and DOK alone:
As Pc 2 = DOK − Chf = 1 ⇒ Chf = DOK − 1 and MChf = −Chf and Pc = 1 always, then: H R b = −{[1 − √(DOK − MChf )]/2} log 2 {[1 − √(DOK − MChf )]/2} − {[1 + √(DOK − MChf )]/2} log 2 {[1 + √(DOK − MChf )]/2}, since DOK − MChf = 1 + 2Chf.

Definition of H M b
From information theory we have: H b (p) = −p log 2 p − (1 − p) log 2 (1 − p). In the real probability set R we have p = P r and 1 − p = 1 − P r , therefore H R b = −P r log 2 P r − (1 − P r ) log 2 (1 − P r ). In the imaginary probability set M we have p = P m = i(1 − P r ). If P r = p, check that P r + P m /i = p + (1 − p) = 1, which is true according to axiom 6. If P r = 1 − p, check that P r + P m /i = (1 − p) + p = 1, which is also true according to axiom 6. Therefore H M b = −P m log 2 (P m ) − (i − P m ) log 2 (i − P m ). Since p = P r and 1 − p = 1 − P r , then H M b = −i(1 − P r ) log 2 [i(1 − P r )] − iP r log 2 (iP r ).

The relation between H M b and H R b :
We have: H M b = −i(1 − P r ) log 2 [i(1 − P r )] − iP r log 2 (iP r ). Since log 2 (xy) = log 2 x + log 2 y, then H M b = −i log 2 (i) × [(1 − P r ) + P r ] + i[−(1 − P r ) log 2 (1 − P r ) − P r log 2 P r ] = −i log 2 (i) + iH R b . Now, using Euler's formula e iθ = cos θ + i sin θ, then for θ = π/2 + 2kπ where k ∈ Z (the set of all integers), we get i = e i(π/2+2kπ) , hence −i log 2 (i) = −i 2 (π/2 + 2kπ) log 2 e = (π/2 + 2kπ)/Ln2 = −log 2 (i i ), since log 2 (x θ ) = θ log 2 x and log 2 e = Lne/Ln2 = 1/Ln2. Therefore H M b = −log 2 (i i ) + iH R b .
Note that for k = 0 ⇒ −log 2 (i i ) = 2.26618. For k = 1 ⇒ −log 2 (i i ) = 11.3309. For k = −1 ⇒ −log 2 (i i ) = −6.79854. Thus we conclude that H M b (p) = −log 2 (i i ) + iH R b (p). Therefore the H M b (p) curve in the complex plane lies always in the constant real planes Re[H M b (p)] = −log 2 (i i ) = (π/2 + 2kπ)/Ln2, depending on the values of k ∈ Z, and in these fixed planes it is equal to iH R b (p).
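The constant real part can be verified numerically; the following small check (an illustrative sketch using Python's cmath, not code from the paper) evaluates −log 2 (i i ) for the principal branch k = 0 and for k = 1.

```python
import cmath, math

# For the principal branch (k = 0): i = e^{i*pi/2}, so i^i = e^{-pi/2}
# and -log2(i^i) = (pi/2)/ln(2) ≈ 2.26618.
i_to_the_i = cmath.exp(1j * cmath.log(1j))          # principal value of i**i
print(i_to_the_i.real)                              # ≈ e^{-pi/2} ≈ 0.20788
print(-math.log2(i_to_the_i.real))                  # ≈ 2.26618
print((math.pi / 2 + 2 * math.pi) / math.log(2))    # k = 1 branch -> ≈ 11.3309
```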

The complementary real binary entropy H̄ R b = H R b :
The real probability in the probability set R is P r and its real entropy is H R b = H b . According to axiom 6, the corresponding imaginary probability in the imaginary probability set M is P m = i(1 − P r ) and its complex entropy is H M b . The related real probability to P m in the set R is P m /i = 1 − P r and its real entropy is H̄ R b . We have H̄ R b = −(P m /i) log 2 (P m /i) − (1 − P m /i) log 2 (1 − P m /i). We have P m = i(1 − P r ), then P m /i = 1 − P r and 1 − P m /i = P r . Therefore, H̄ R b = −(1 − P r ) log 2 (1 − P r ) − P r log 2 P r = H R b .

The real negative binary entropy NegH R b :
The real negative binary entropy is defined by: NegH R b = −H R b . Hence, when H R b is maximum, NegH R b is minimum and vice versa. So when H R b = 1 = maximum for P r = 0.5 then NegH R b = −1 = minimum. Also, both of them are zero when P r = 0 (impossible event) or P r = 1 (sure event), so when H R b = 0 = minimum then NegH R b = 0 = maximum. Therefore, if H R b measures the amount of disorder, of uncertainty, of unpredictability, and of information gain in a message, then, since NegH R b = −H R b (that means the opposite of H R b ), NegH R b measures the amount of order, of certainty, of predictability, and of information loss in a message.

The binary entropy H C b in the set C:
In the probability set C we have p = Pc = 1, therefore H C b = −Pc log 2 Pc − (1 − Pc) log 2 (1 − Pc) = −1 × log 2 1 − 0 × log 2 0 = 0, since by L'Hôpital's rule lim p→0+ p log 2 p = 0, and log 2 1 = 0 (Figure 24).

Relations between the binary entropies H R b , H̄ R b , NegH R b , H M b , and H C b :
Note that: H̄ R b (p) = H R b (p), NegH R b (p) = −H R b (p), H M b (p) = −log 2 (i i ) + iH R b (p), and H C b (p) = 0.
6.7.1.1. The first derivative of H R b . We have dH R b (p)/dp = −log 2 p + log 2 (1 − p) = log 2 [(1 − p)/p]. We have also that H R b can be expressed in terms of DOK, Chf, and MChf, as was shown above; therefore its derivative is taken with respect to p, since Chf (p), MChf (p), and DOK(p) are not monotonous functions of the strictly increasing variable p ∈ [0, 1].
Moreover, since 0.5 ≤ DOK ≤ 1, −0.5 ≤ Chf ≤ 0, and 0 ≤ MChf ≤ 0.5 ∀p ∈ [0, 1], these bounds are all reached at p = 0.5, where dH R b (p)/dp = log 2 (0.5/0.5) = log 2 1 = 0. Hence we have the maximum of H R b (p) at this point, which is absolutely true for any probability distribution.
If p = 0 then Chf = MChf = 0 and DOK = 1, therefore lim p→0+ dH R b (p)/dp = lim p→0+ log 2 [(1 − p)/p] = +∞. That means that at p = 0 the tangent to H R b (p) is vertical and H R b (p) is increasing. If p = 1 then Chf = MChf = 0 and DOK = 1, therefore lim p→1− dH R b (p)/dp = −∞. That means that at p = 1 the tangent to H R b (p) is vertical and H R b (p) is decreasing.

The first derivative of H̄ R b .
Since H̄ R b (p) = H R b (p), then we will reach all the same conclusions as for H R b (p).
Since NegH R b (p) = −H R b (p), its first derivative is dNegH R b (p)/dp = −log 2 [(1 − p)/p], which is equal to 0 at p = 0.5. Hence we have the minimum of NegH R b (p) at this point, which is absolutely true for any probability distribution.
If p = 0 then Chf = MChf = 0 and DOK = 1, therefore lim p→0+ dNegH R b (p)/dp = −∞. That means that at p = 0 the tangent to NegH R b (p) is vertical and NegH R b (p) is decreasing. If p = 1 then Chf = MChf = 0 and DOK = 1, therefore lim p→1− dNegH R b (p)/dp = +∞. That means that at p = 1 the tangent to NegH R b (p) is vertical and NegH R b (p) is increasing.

The first derivative of H M b .
Since the real part Re[H M b (p)] = −log 2 (i i ) is a constant term, its first derivative equals 0; therefore the first derivative of H M b (p) is similar to that of H R b (p) but in the complex plane. Hence, we will reach all the same conclusions as for H R b (p).

The first derivative of H C b .
Since H C b (p) = 0, its first derivative with respect to p is dH C b (p)/dp = 0, ∀DOK ∈ [0.5, 1].
That means H C b (p) is a horizontal line which is always equal to zero. The figures above illustrate all these calculations (Figures 25-28).

The second derivatives of the binary entropy functions
6.7.2.1. The second derivative of H R b .
Since p ∈ [0, 1], then p ≥ 0 and 1 − p ≥ 0, as well as Ln2 > 0. Therefore d 2 H R b (p)/dp 2 = −1/[p(1 − p)Ln2] ≤ 0, so H R b (p) is a curve concave down everywhere, which is absolutely true for any probability distribution.
We have p = [1 + √(1 + 2Chf )]/2 or p = [1 − √(1 + 2Chf )]/2 from the CPP relations above.
Figure 29. The second derivatives of the binary entropy functions for the standard Gaussian normal distribution.

Graphical representations
The figures below verify and illustrate all the computations and calculations made (Figures 36-42).

Analysis
We have always: 0 ≤ p ≤ 1, 0 ≤ 1 − p ≤ 1, 0 ≤ P r ≤ 1, and 0 ≤ P m ≤ i; then 0 ≤ H R b ≤ 1 and Re[H M b ] = −log 2 (i i ) = (π/2 + 2kπ)/Ln2, k ∈ Z. If P r = 0 or P r = 1 then P m = i or P m = 0 respectively, therefore H R b = H̄ R b = 0, and this for any probability distribution of P r (X). Moreover, we have always: H R b (P r ) = H̄ R b (P r ). This is due to the symmetric nature of the expression of H R b (P r ) = H b (P r ), which leads to continuous compensations in its formula between p and 1 − p, ∀p : 0 ≤ p ≤ 1, as well as between P r and P m /i = 1 − P r , ∀P r , ∀P m /i : 0 ≤ P r , P m /i ≤ 1.

The BSC channel capacity in the CPP sets (Shannon's information theory)
We have from Shannon's information theory C BSC (p) = 1 − H b (p), where p is the probability of bit flips and BSC = binary symmetric channel.
The channel capacity in R corresponding to the real probability P r = p will be C R BSC (p) = 1 − H R b (p). The channel capacity in M corresponding to the imaginary probability P m will be C M BSC (p) = 1 − H M b (p) = 1 + log 2 (i i ) − iH R b (p), where −log 2 (i i ) = (π/2 + 2kπ)/Ln2 and k ∈ Z.
The channel capacity in R corresponding to the complementary real probability P m /i = 1 − P r = 1 − p will be C̄ R BSC (p) = 1 − H̄ R b (p) = 1 − H R b (p). The channel capacity in C corresponding to the probability Pc = 1 will be C C BSC = 1 − H C b = 1 − 0 = 1. The figures below illustrate all the computations and formulas deduced (Figures 43-46).
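A minimal sketch of these capacities, assuming the relations reconstructed above (the real capacity 1 − H R b, the complementary capacity with 1 − P r, and a capacity of 1 in C); the function names are mine.

```python
import math

def binary_entropy(p: float) -> float:
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def bsc_capacities(pr: float):
    """BSC capacities associated with Pr, with its complement 1 - Pr, and with Pc = 1."""
    c_real = 1.0 - binary_entropy(pr)              # capacity in R for Pr
    c_complement = 1.0 - binary_entropy(1.0 - pr)  # capacity in R for Pm/i = 1 - Pr
    c_complex = 1.0 - 0.0                          # in C the entropy is 0, so the capacity is 1
    return c_real, c_complement, c_complex

for pr in (0.0, 0.25, 0.5):
    print(pr, bsc_capacities(pr))
```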

The evaluation of the new paradigm parameters
The cumulative distribution function (CDF) of the discrete or continuous random variable of the message x is denoted by F(x). Then the new CPP parameters are the following: The real probability: P r (x) = F(x). The imaginary probability: P m (x) = i[1 − F(x)] = i[1 − P r (x)]. The real complementary probability: P m (x)/i = 1 − F(x) = 1 − P r (x). The complex random vector: Z(x) = P r (x) + P m (x) = P r (x) + i[1 − P r (x)]. The Degree of Our Knowledge (DOK): DOK(x) = |Z(x)| 2 = 1 − 2P r (x)[1 − P r (x)]. The Magnitude of the Chaotic Factor (MChf ): MChf (x) = |Chf (x)| = 2P r (x)[1 − P r (x)]. For any value of the random variable x, the probability expressed in the complex set C is: Pc 2 (x) = DOK(x) − Chf (x) = 1 ⇒ Pc(x) = 1. Hence, the prediction of the outcome of the message random variable x in C is permanently certain and absolutely deterministic.
The surprisal of the message x in base 2 is: I 2 (x) = −log 2 P r (x). The rescaled surprisal of the message x in base 2 is: RI 2 (x). The expectancy of the same message x in base 2 is: Ī 2 (x) = −log 2 [1 − P r (x)]. The rescaled expectancy of the same message x in base 2 is: RĪ 2 (x). The real binary entropy in R is: H R b (x) = −P r (x) log 2 P r (x) − [1 − P r (x)] log 2 [1 − P r (x)]. The complex binary entropy in M is: H M b (x) = −log 2 (i i ) + iH R b (x). The real complementary binary entropy in R is: H̄ R b (x) = H R b (x). The real negative binary entropy in R is: NegH R b (x) = −H R b (x). The binary entropy in C is: H C b (x) = 0. The real BSC capacity in R is: C R BSC (x) = 1 − H R b (x). The complex BSC capacity in M is: C M BSC (x) = 1 − H M b (x). The real complementary BSC capacity in R is: C̄ R BSC (x) = 1 − H̄ R b (x). The BSC capacity in C is: C C BSC (x) = 1 − H C b (x) = 1. Let us consider thereafter different discrete and continuous probability distributions to simulate the probability cumulative distribution function P r (x) = F(x) and to draw, to visualise, as well as to quantify all the information theory new paradigm parameters.
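The following sketch evaluates the main section-VIII parameters numerically for one illustrative case, the standard normal CDF on the assumed domain [−4, 4]; it relies on NumPy/SciPy and the variable names and grid are mine, not the paper's.

```python
import numpy as np
from scipy import stats

x = np.linspace(-4.0, 4.0, 8001)
pr = stats.norm.cdf(x)                  # real probability Pr(x) = F(x)
pm_over_i = 1.0 - pr                    # complementary probability Pm(x)/i = 1 - F(x)

dok = 1.0 - 2.0 * pr * pm_over_i        # DOK, the degree of our knowledge
chf = -2.0 * pr * pm_over_i             # Chf, the chaotic factor
mchf = np.abs(chf)                      # MChf, its magnitude
pc_squared = dok - chf                  # Pc^2 = DOK - Chf, identically equal to 1

i2 = -np.log2(pr)                       # surprisal I_2(x)
ibar2 = -np.log2(pm_over_i)             # expectancy Ibar_2(x)
hb_real = pr * i2 + pm_over_i * ibar2   # real binary entropy H_b^R(x)
c_bsc_real = 1.0 - hb_real              # real BSC capacity

print(np.allclose(pc_squared, 1.0))     # True: the prediction in C is deterministic
print(hb_real.max())                    # ≈ 1 bit, reached where Pr(x) = 0.5 (the median)
```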

Flowchart of the complex probability information theory paradigm
The following flowchart summarises all the procedures of the proposed complex probability prognostic model for Shannon's information theory.

The new paradigm applied to various discrete and continuous probability distributions
In this section, the simulation of the novel CPP model for various discrete and continuous random distributions will be done. Note that all the numerical values found in the paradigm functions analysis for all the simulations were computed using the 64-bit MATLAB version 2017 software. It is important to mention here that a few important and well-known probability distributions were considered, although the original CPP model can be applied to any random distribution besides the ten probability cases below. This will lead to similar results and conclusions. Hence, the new paradigm is successful with any discrete or continuous random case (refer to the definitions, graphs, and axioms in section III).

The binomial probability distribution
The probability density function (PDF) of this discrete distribution is the binomial law f(x) = [N!/(x!(N − x)!)] p^x q^(N−x) with p + q = 1. Taking in our simulation N = 16 and p = q = 0.5, the mean of this binomial random distribution is μ = Np = 16 × 0.5 = 8.
The standard deviation is σ = √(Npq) = 2. The cumulative distribution function (CDF) is F(x) = Σ_{k ≤ x} f(k). I have taken the domain for the binomial random variable to be x ∈ [0, N] = [0, 16]. The real probability P_r(x) is F(x) and the complementary probability P_m(x)/i is 1 − F(x). The other parameters are calculated from the CPP paradigm (refer to section VIII) (Figures 47-49).
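For this binomial case (N = 16, p = 0.5), a short illustrative sketch using scipy.stats (an assumption of this example; the paper's simulations were done in MATLAB) reproduces the stated mean and standard deviation and verifies Pc² = 1 over the whole domain:

```python
import numpy as np
from scipy.stats import binom

N, p = 16, 0.5
x = np.arange(0, N + 1)            # domain of the binomial random variable: 0..16
F = binom.cdf(x, N, p)             # cumulative distribution function F(x)

Pr = F                             # real probability in R
Pm_over_i = 1.0 - F                # complementary probability P_m / i
DOK = Pr**2 + Pm_over_i**2         # degree of our knowledge
MChf = 2.0 * Pr * Pm_over_i        # magnitude of the chaotic factor
Pc2 = DOK + MChf                   # probability in the complex set C, squared

print("mean =", binom.mean(N, p), "std =", binom.std(N, p))   # 8.0 and 2.0
assert np.allclose(Pc2, 1.0)
```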

The Poisson probability distribution
The probability density function (PDF) of this discrete distribution is f(x) = e^(−λ) λ^x / x!, and the cumulative distribution function (CDF) is F(x) = Σ_{k ≤ x} f(k). The real probability P_r(x) is F(x) and the complementary probability P_m(x)/i is 1 − F(x). The mean of this Poisson random distribution is μ = λ = 10.6685.
The standard deviation is σ = √λ ≈ 3.2663. The median is Md ≈ λ + 1/3 − 0.02/λ = 10. The rescaled surprisal and expectancy self-information functions are computed with the simulation rescaling factor of 14.82. The other parameters are calculated from the CPP paradigm (refer to section VIII) (Figures 50-52).

The uniform probability distribution
The probability density function (PDF) of this continuous distribution is f(x) = 1/(U_b − L_b) on the domain [L_b, U_b], and the cumulative distribution function (CDF) is F(x) = (x − L_b)/(U_b − L_b). The real probability P_r(x) is F(x) and the complementary probability P_m(x)/i is 1 − F(x). The mean of this continuous uniform random distribution is μ = (L_b + U_b)/2 and the standard deviation is σ = (U_b − L_b)/√12. The median is Md = 0. The rescaled surprisal and expectancy self-information functions are computed with the simulation rescaling factor of 8.7. The other parameters are calculated from the CPP paradigm (refer to section VIII) (Figures 53-55).

The standard Gaussian normal probability distribution
The probability density function (PDF) of this continuous distribution is f(x) = (1/√(2π)) e^(−x²/2), and the cumulative distribution function (CDF) is F(x) = ∫ from −∞ to x of f(t) dt. The domain for this standard Gaussian normal variable is x ∈ [L_b = −4, U_b = 4] and I have taken dx = 0.001.
The real probability P_r(x) is F(x) and the complementary probability P_m(x)/i is 1 − F(x). In the simulations, the mean of this standard normal random distribution is μ = 0.
The standard deviation is σ = 1. The median is Md = 0. The rescaled surprisal and expectancy self-information functions are computed with the simulation rescaling factor of 15. The other parameters are calculated from the CPP paradigm (refer to section VIII) (Figures 56-58).
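The analogous sketch for this continuous case, with the stated domain x ∈ [−4, 4] and step dx = 0.001 (again using scipy.stats as an illustrative stand-in for the MATLAB simulations), also shows DOK ranging from 0.5 at the median to approximately 1 at the domain bounds:

```python
import numpy as np
from scipy.stats import norm

dx = 0.001
x = np.arange(-4.0, 4.0 + dx, dx)   # domain of the standard normal variable
F = norm.cdf(x)                      # Pr(x) = F(x) with mu = 0 and sigma = 1

Pr = F
Pm_over_i = 1.0 - F
DOK = Pr**2 + Pm_over_i**2
Chf = -2.0 * Pr * Pm_over_i
assert np.allclose(DOK - Chf, 1.0)   # Pc^2 = DOK - Chf = 1 at every x

# DOK is minimal (0.5) at the median x = 0 and tends to 1 at the domain bounds
print(round(DOK.min(), 3), round(DOK.max(), 3))   # 0.5 and 1.0 (approximately)
```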

The exponential probability distribution
The probability density function (PDF) of this continuous distribution is f(x) = (1/μ) e^(−x/μ), where μ is the parameter of the distribution and is equal to 1 here. The cumulative distribution function (CDF) is F(x) = 1 − e^(−x/μ). The real probability P_r(x) is F(x) and the complementary probability P_m(x)/i is 1 − F(x). In the simulations, the mean of this exponential random distribution is μ = 1.
The standard deviation is σ = 1. The median is Md = Ln2 = 0.693147. The rescaled surprisal and expectancy self-information functions are computed with the simulation rescaling factor of 10. The other parameters are calculated from the CPP paradigm (refer to section VIII) (Figures 59-61).

The gamma probability distribution
The probability density function (PDF) of this continuous distribution is f(x) = x^(α−1) e^(−x/β) / [Γ(α) β^α] for 0 < x < ∞ and α, β > 0, where α is the shape parameter = 1, β is the scale parameter = 1.75, and Γ(α) is the complete gamma function. The cumulative distribution function (CDF) is F(x) = ∫ from 0 to x of f(t) dt. The domain for this gamma variable is x ∈ (L_b = 0, U_b = 14] and I have taken dx = 0.01. The real probability P_r(x) is F(x) and the complementary probability P_m(x)/i is 1 − F(x). In the simulations, the mean of this gamma random distribution is μ = αβ = 1.75.
The standard deviation is σ = √(αβ²) = 1.75. There is no simple closed form for the median Md of the gamma distribution; hence, from the simulations we have Md = 1.21.
A graph for the surprisal and expectancy self-information functions for this distribution can be drawn that is similar to the previous graphs for the other probability distributions. The other parameters are calculated from the CPP paradigm (refer to section VIII) (Figures 62 and 63).

The beta probability distribution
The probability density function (PDF) and the cumulative distribution function (CDF) of this continuous distribution are those of the beta law, and I have taken dx = 0.0001 over the domain of this beta variable. The real probability P_r(x) is F(x) and the complementary probability P_m(x)/i is 1 − F(x). In the simulations, the mean of this beta distribution is μ = α/(α + β) = 3/1.2 = 2.5. The standard deviation σ and the other parameters are calculated from the CPP paradigm (refer to section VIII) (Figures 64-66).

The chi2 probability distribution
The probability density function (PDF) of this continuous distribution is f(x) = x^(ν/2−1) e^(−x/2) / [2^(ν/2) Γ(ν/2)], where ν > 0 is the number of degrees of freedom; ν = 4 here.

The F probability distribution
For this continuous distribution, the standard deviation in the simulations is σ = 0.770759 and the median Md of the F distribution is ≈ 0.942. A graph for the surprisal and expectancy self-information functions for this distribution can be drawn that is similar to the previous graphs for the other probability distributions. The other parameters are calculated from the CPP paradigm (refer to section VIII) (Figures 69 and 70).

The Student's t probability distribution
The probability density function (PDF) of this continuous distribution is f(x) = [Γ((ν + 1)/2) / (√(νπ) Γ(ν/2))] (1 + x²/ν)^(−(ν+1)/2), where ν > 0 is the number of degrees of freedom (ν = 3 here) and Γ(·) is the gamma function. The cumulative distribution function (CDF) is F(x) = ∫ from −∞ to x of f(t) dt. The domain for this Student's t variable is x ∈ [L_b = −10, U_b = 10] and I have taken dx = 0.001. The real probability P_r(x) is F(x) and the complementary probability P_m(x)/i is 1 − F(x). In the simulations, the mean of this t distribution is μ = 0.
The standard deviation is σ = √(ν/(ν − 2)) = 1.73205. The median Md of the t distribution is 0. A graph for the surprisal and expectancy self-information functions for this distribution can be drawn that is similar to the previous graphs for the other probability distributions.
The other parameters are calculated from the CPP paradigm (refer to section VIII) (Figures 71 and 72).

Final analysis
In the complex set C we have the entropy always equal to 0, so there is no loss and no gain but complete conservation of information. The Lavoisier principle in chemistry and science affirms that mass and energy are conserved. The Law of Conservation of Mass (or Matter) in a chemical reaction can be stated thus: in a chemical reaction, matter is neither created nor destroyed. It was discovered by Antoine Laurent Lavoisier (1743-1794) around 1785. Therefore, it applies also to information theory. Moreover, in C we have parallel planes and parallel similar curves for the entropy and the channel capacity. In R, we have disorder, uncertainty, and unpredictability. In C we have order, certainty, and predictability since Pc = 1 permanently and the entropy is constantly equal to 0. Additionally, in R we have chaos and imperfect and incomplete knowledge, or partial ignorance. In C the chaos is always equal to 0 and DOK = 1 continuously, thus complete, perfect, and total knowledge of the random message and channel. (As x increases, P_r increases and Im(Z) decreases; at x = Median, where P_r = 0.5, we have Z = 0.5 + 0.5i, DOK = +0.5 and Chf = −0.5 at their minima, MChf = +0.5 at its maximum, and Pc = 1.)
Furthermore, the extension of all random and nondeterministic phenomena in R to the set C leads to certain knowledge and sure events since DOK = 1 and Pc = 1. Consequently, no randomness exists in C and all phenomena are deterministic in this set. Therefore, in C the prognostic is assured and definite. Table 1 summarises the complex probability paradigm prognostic functions for any probability distribution (↑ = increases and ↓ = decreases). Table 2 summarises the complex probability paradigm prognostic entropies for any probability distribution. Table 3 summarises the complex probability paradigm prognostic BSC capacities for any probability distribution.
Accordingly, at each instant in the novel prognostic model, the random entropy and channel capacity are certainly predicted in the complex set C with Pc² = DOK − Chf = DOK + MChf maintained as equal to one through a continuous compensation between DOK and Chf. This compensation holds from the instant x = L_b until the instant x = U_b. We can understand also that DOK is the measure of our certain knowledge (100% probability) about the expected event; it does not include any uncertain knowledge (with a probability less than 100%). We can see that in computing Pc² we have eliminated and subtracted in the equation above all the random factors and chaos (Chf) from our random experiment; hence no chaos exists in C, it only exists (if it does) in R. Therefore, this has yielded a 100% deterministic experiment and outcome in C since the probability Pc is continuously equal to 1. This is one of the advantages of extending R to C and hence of working in C = R + M. Hence, in the novel prognostic model, our knowledge of all the parameters and indicators (I_2, Ī_2, H_b, C, etc.) is always perfect, constantly complete, and totally predictable since Pc = 1 permanently, independently of any probability profile or random factors.
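The continuous compensation invoked here can be verified with a one-line algebraic identity, assuming (as in the nomenclature) Z = P_r + P_m with P_m = i(1 − P_r):

```latex
% Algebraic check of Pc^2 = DOK - Chf = DOK + MChf = 1
\[
DOK = |Z|^2 = P_r^2 + (1 - P_r)^2, \qquad
Chf = -2\,P_r\,(1 - P_r), \qquad
MChf = |Chf| = 2\,P_r\,(1 - P_r),
\]
\[
Pc^2 = DOK - Chf = DOK + MChf
     = P_r^2 + (1 - P_r)^2 + 2\,P_r\,(1 - P_r)
     = \bigl(P_r + (1 - P_r)\bigr)^2 = 1 .
\]
```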

Conclusion and perspectives
In the current paper we applied and linked the theory of the Extended Kolmogorov Axioms (EKA) to Claude Shannon's information theory. Hence, a tight bond between the new paradigm and the quantities of information, entropies, and channel capacities was established. Thus, the theory of 'Complex Probability' was developed beyond the scope of my previous nine papers on this topic.
Moreover, as was proved and illustrated in the new model, when x = L_b or x = U_b, the degree of our knowledge (DOK) is one and the chaotic factor (Chf and MChf) is 0 since the state of the random message and channel is totally known. During the process of message transmission [L_b < (Message x) < U_b] we have: 0.5 < DOK < 1, −0.5 < Chf < 0, and 0 < MChf < 0.5. Notice that during this whole process we have always Pc² = DOK − Chf = DOK + MChf = 1, which means that the phenomenon that seems to be random and stochastic in R is now deterministic and certain in C = R + M, and this after adding to R the contributions of M and hence after subtracting the chaotic factor from the degree of our knowledge. Furthermore, the probabilities of the message flips corresponding to each instance of x have been determined in the probability sets R, M, and C by P_r, P_m, and Pc respectively. Therefore, at each instance of x, the information theory parameters I_2, Ī_2, H_b, C, etc. are surely predicted in the complex set C with Pc maintained as equal to 1 permanently. Furthermore, using all the illustrated graphs and simulations throughout the whole paper, we can visualise and quantify both the system chaos (Chf and MChf) and the certain knowledge (DOK and Pc) of the information theory model. This is certainly very interesting and fruitful and shows once again the benefits of extending Kolmogorov's axioms, and thus the originality and usefulness of this new field in applied mathematics and prognostics that can verily be called: 'The Complex Probability Paradigm'.
It is important to mention in the conclusion that a few important and well-known probability distributions were considered in the current research paper although the original CPP model can be applied to any random distribution. This will lead to similar results and conclusions and proves the success of my novel paradigm.
As prospective and future work, it is planned to develop the novel proposed prognostic paradigm further and to apply it to a wide set of stochastic and random systems, such as the analytic prognostic of vehicle suspension systems and of petrochemical pipelines (in their three modes: unburied, buried, and offshore) under the linear and nonlinear damage accumulation cases.

Disclosure statement
No potential conflict of interest was reported by the author(s).