The paradigm of complex probability and Monte Carlo methods

In 1933, Andrey Nikolaevich Kolmogorov established the system of five axioms that define the concept of mathematical probability. This system can be extended to the set of imaginary numbers by adding three supplementary original axioms. Therefore, any experiment can be performed in the set of complex probabilities C, which is the sum of the set of real probabilities R and the set of imaginary probabilities M. The purpose here is to add imaginary dimensions to the experiment taking place in the 'real' laboratory in R and hence to evaluate all the probabilities. Consequently, the probability in the entire set C = R + M is permanently equal to one no matter what the stochastic distribution of the input random variable in R is; therefore, the outcome of the probabilistic experiment in C can be determined perfectly. This is due to the fact that the probability in C is calculated after subtracting the chaotic factor of the random experiment from the degree of our knowledge. This novel complex probability paradigm will be applied to the classical probabilistic Monte Carlo numerical methods in order to prove the convergence of these stochastic procedures in an original way.


CONTACT: Abdo Abou Jaoude, abdoaj@idm.net.lb

Nomenclature

R: the real set of events
M: the imaginary set of events
C: the complex set of events
i: the imaginary number, where i = √−1 or i² = −1
EKA: the Extended Kolmogorov Axioms
CPP: the Complex Probability Paradigm
Prob: the probability of any event
Pr: the probability in the real set R = the probability of convergence in R
Pm: the probability in the imaginary set M corresponding to the real probability in R = the probability of divergence in M
Pc: the probability of an event in R with its associated event in M = the probability in the complex probability set C
R_E: the exact result of the random experiment
R_A: the approximate result of the random experiment
z: the complex probability number = the sum of Pr and Pm = the complex random vector
DOK = |z|²: the degree of our knowledge of the random system or experiment; it is the square of the norm of z
Chf: the chaotic factor of z
MChf: the magnitude of the chaotic factor of z
N: the number of random vectors = the number of iteration cycles
N_C: the number of random vectors = the number of iteration cycles until the convergence of the Monte Carlo method to R_E
Z: the resultant complex random vector = Σ_{j=1}^{N} z_j
DOK_Z = |Z|²/N²: the degree of our knowledge of the whole stochastic system
Chf_Z = Chf/N²: the chaotic factor of the whole stochastic system
MChf_Z: the magnitude of the chaotic factor of the whole stochastic system
Z_U: the resultant complex random vector corresponding to a uniform random distribution
DOK_{Z_U}: the degree of our knowledge of the whole stochastic system corresponding to a uniform random distribution

Introduction
Firstly, in this introductory section, an overview of Monte Carlo methods will be given. Before the Monte Carlo method was developed, simulations tested a previously understood deterministic problem, and statistical sampling was used to estimate uncertainties in the simulations. Monte Carlo simulations invert this approach, solving deterministic problems using a probabilistic analog (one can refer to simulated annealing). An early variant of the Monte Carlo method can be seen in Buffon's needle experiment, in which π can be estimated by dropping needles on a floor made of parallel and equidistant strips. In the 1930s, Enrico Fermi first experimented with the Monte Carlo method while studying neutron diffusion, but did not publish anything on it (Metropolis, 1987).
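Buffon's experiment translates directly into a short simulation. The sketch below is an illustrative Python reconstruction (the function name and the sampling scheme of centre offset and angle are our own choices), assuming the needle is no longer than the strip width:

```python
import random, math

def buffon_pi(n_throws: int, needle_len: float = 1.0,
              strip_width: float = 1.0, seed: int = 0) -> float:
    """Estimate pi by Buffon's needle: drop a needle of length l <= d on a
    floor ruled with parallel lines a distance d apart; the crossing
    probability is 2*l / (pi*d)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_throws):
        # distance from the needle's centre to the nearest line, and its angle
        x = rng.uniform(0.0, strip_width / 2.0)
        theta = rng.uniform(0.0, math.pi / 2.0)
        if x <= (needle_len / 2.0) * math.sin(theta):
            hits += 1
    # invert P(cross) = 2*l/(pi*d)  =>  pi ~ 2*l*n / (d*hits)
    return 2.0 * needle_len * n_throws / (strip_width * hits)

print(buffon_pi(1_000_000))  # close to 3.14159 for large n
```

Inverting the crossing probability 2l/(πd) turns the observed hit frequency into an estimate of π whose error shrinks like 1/√N.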
The modern version of the Markov Chain Monte Carlo method was invented in the late 1940s by Stanislaw Ulam, while he was working on nuclear weapons projects at the Los Alamos National Laboratory. Immediately after Ulam's breakthrough, John von Neumann understood its importance and programmed the ENIAC computer to carry out Monte Carlo calculations. In 1946, physicists at Los Alamos Scientific Laboratory were investigating radiation shielding and the distance that neutrons would likely travel through various materials. Despite having most of the necessary data, such as the average distance a neutron would travel in a substance before it collided with an atomic nucleus, and how much energy the neutron was likely to give off following a collision, the Los Alamos physicists were unable to solve the problem using conventional, deterministic mathematical methods. Ulam had the idea of using random experiments. He recounts his inspiration as follows: The first thoughts and attempts I made to practice [the Monte Carlo Method] were suggested by a question which occurred to me in 1946 as I was convalescing from an illness and playing solitaires. The question was what are the chances that a Canfield solitaire laid out with 52 cards will come out successfully? After spending a lot of time trying to estimate them by pure combinatorial calculations, I wondered whether a more practical method than 'abstract thinking' might not be to lay it out say one hundred times and simply observe and count the number of successful plays. This was already possible to envisage with the beginning of the new era of fast computers, and I immediately thought of problems of neutron diffusion and other questions of mathematical physics, and more generally how to change processes described by certain differential equations into an equivalent form interpretable as a succession of random operations. Later [in 1946], I described the idea to John von Neumann, and we began to plan actual calculations. 
(Eckhardt, 1987) Being secret, the work of von Neumann and Ulam required a code name (Mazhdrakov, Benov, & Valkanov, 2018). A colleague of von Neumann and Ulam, Nicholas Metropolis, suggested using the name Monte Carlo, which refers to the Monte Carlo Casino in Monaco, where Ulam's uncle would borrow money from relatives to gamble (Metropolis, 1987). Using lists of 'truly random' numbers was extremely slow, but von Neumann developed a way to calculate pseudorandom numbers using the middle-square method. Though this method has been criticized as crude, von Neumann was aware of this: he justified it as being faster than any other method at his disposal, and also noted that when it went awry it did so obviously, unlike methods that could be subtly incorrect (Peragine, 2013).
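Von Neumann's middle-square scheme is simple enough to sketch. The generator below is a hypothetical reconstruction (the zero-padding convention and the even number of digits are our assumptions), not his original routine:

```python
def middle_square(seed: int, digits: int = 4):
    """Generate pseudorandom numbers by von Neumann's middle-square method:
    square the current value and keep the middle `digits` digits as the next
    value.  `digits` is assumed even here.  The sequence eventually degenerates
    (it falls to zero or into a short cycle), which is the 'obvious failure'
    von Neumann accepted as the price of speed."""
    x = seed
    while True:
        sq = str(x * x).zfill(2 * digits)            # pad so the middle is well defined
        mid = len(sq) // 2
        x = int(sq[mid - digits // 2: mid + digits // 2])
        yield x

gen = middle_square(1234)
print([next(gen) for _ in range(5)])
```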
Monte Carlo methods were central to the simulations required for the Manhattan Project, though severely limited by the computational tools at the time. In the 1950s they were used at Los Alamos for early work relating to the development of the hydrogen bomb, and became popularized in the fields of physics, physical chemistry, and operations research. The Rand Corporation and the U.S. Air Force were two of the major organizations responsible for funding and disseminating information on Monte Carlo methods during this time, and they began to find a wide application in many different fields.
The theory of more sophisticated mean field type particle Monte Carlo methods had certainly started by the mid-1960s, with the work of Henry P. McKean Jr. on Markov interpretations of a class of nonlinear parabolic partial differential equations arising in fluid mechanics (McKean, 1966, 1967). We also quote an earlier pioneering article by Theodore E. Harris and Herman Kahn, published in 1951, using mean field genetic-type Monte Carlo methods for estimating particle transmission energies (Herman & Theodore, 1951). Mean field genetic-type Monte Carlo methodologies are also used as heuristic natural search algorithms (also known as metaheuristics) in evolutionary computing. The origins of these mean field computational techniques can be traced to 1950 and 1954, with the work of Alan Turing on genetic-type mutation-selection learning machines (Turing, 1950) and the articles by Nils Aall Barricelli at the Institute for Advanced Study in Princeton, New Jersey (Barricelli, 1954, 1957). Quantum Monte Carlo, and more specifically Diffusion Monte Carlo methods, can also be interpreted as a mean field particle Monte Carlo approximation of Feynman-Kac path integrals (Assaraf, Caffarel, & Khelif, 2000; Caffarel, Ceperley, & Kalos, 1993; Del Moral, 2003; Del Moral, 2004; Del Moral & Miclo, 2000a, 2000b; Hetherington, 1984). The origins of Quantum Monte Carlo methods are often attributed to Enrico Fermi and Robert Richtmyer, who developed in 1948 a mean field particle interpretation of neutron-chain reactions (Fermi & Richtmyer, 1948), but the first heuristic-like and genetic-type particle algorithm (also known as Resampled or Reconfiguration Monte Carlo methods) for estimating ground state energies of quantum systems (in reduced matrix models) is due to Jack H. Hetherington in 1984 (Hetherington, 1984). In molecular chemistry, the use of genetic heuristic-like particle methodologies (also known as pruning and enrichment strategies) can be traced back to 1955, with the seminal work of Rosenbluth and Rosenbluth (1955).
The use of Sequential Monte Carlo in advanced signal processing and Bayesian inference is more recent. It was in 1993 that Gordon et al. published in their seminal work (Gordon, Salmond, & Smith, 1993) the first application of a Monte Carlo resampling algorithm in Bayesian statistical inference. The authors named their algorithm 'the bootstrap filter' and demonstrated that, compared to other filtering methods, their bootstrap algorithm does not require any assumption about the state-space or the noise of the system. We also quote another pioneering article in this field by Genshiro Kitagawa on a related 'Monte Carlo filter' (Kitagawa, 1996), and the ones by Pierre Del Moral and by Carvalho, Del Moral, Monin, and Salut (1997) on particle filters, published in the mid-1990s. Particle filters were also developed in signal processing in 1989-1992 by P. Del Moral, J.C. Noyer, G. Rigal, and G. Salut in the LAAS-CNRS in a series of restricted and classified research reports with STCAN (Service Technique des Constructions et Armes Navales), the IT company DIGILOG, and the LAAS-CNRS (the Laboratory for Analysis and Architecture of Systems) on RADAR/SONAR and GPS signal processing problems (Del Moral, Noyer, Rigal, & Salut, 1992c; Del Moral, Rigal, & Salut, 1991, September; Del Moral, Rigal, & Salut, 1991, April; Del Moral, Rigal, & Salut, 1992, October; Del Moral, Rigal, & Salut, 1992, January; Del Moral, Rigal, & Salut, 1993). These Sequential Monte Carlo methodologies can be interpreted as an acceptance-rejection sampler equipped with an interacting recycling mechanism.
From 1950 to 1996, all the publications on Sequential Monte Carlo methodologies, including the pruning and resampling Monte Carlo methods introduced in computational physics and molecular chemistry, presented natural and heuristic-like algorithms applied to different situations, without a single proof of their consistency and without a discussion of the bias of the estimates or of genealogical and ancestral tree-based algorithms. The mathematical foundations and the first rigorous analysis of these particle algorithms are due to Del Moral in 1996 (Del Moral, 1996, 1998). Branching-type particle methodologies with varying population sizes were also developed at the end of the 1990s by Dan Crisan, Jessica Gaines, and Terry Lyons (Crisan & Lyons, 1997; Crisan, Gaines, & Lyons, 1998) and by Crisan, Del Moral, and Lyons (1999). Further developments in this field were made around 2000 by P. Del Moral and co-workers (Del Moral & Guionnet, 1999, 2001; Del Moral & Miclo, 2000a).
Finally, this research work is organized as follows: after the introduction in section 1, the purpose and the advantages of the present work are presented in section 2. Afterward, in section 3, we will explain and illustrate the complex probability paradigm with its original parameters and interpretation. In section 4, the Monte Carlo techniques of integration and simulation will be explained. In section 5, I will extend Monte Carlo methods to the imaginary and complex probability sets and hence link this concept to my novel complex probability paradigm. Moreover, in section 6, I will prove the convergence of Monte Carlo methods using the concept of the resultant complex random vector Z. Furthermore, in section 7, we will evaluate the original paradigm parameters, and in section 8, a flowchart of the complex probability and Monte Carlo methods prognostic model will be drawn. Additionally, in section 9, simulations of Monte Carlo methods will be accomplished in the continuous and discrete cases. Finally, I conclude the work with a comprehensive summary in section 10 and then present the list of references cited in the current research work.

The purpose and the advantages of the present work
In this section, we will present the purpose and the advantages of the current research work. Computing probabilities is the main task of classical probability theory. Adding new dimensions to stochastic experiments will lead to a deterministic expression of probability theory; this is the original idea at the foundation of this work. Actually, the theory of probability is a nondeterministic system in its essence; that means that the outcomes of events are due to chance and randomness. The addition of novel imaginary dimensions to the chaotic experiment occurring in the set R will yield a deterministic experiment, and hence a stochastic event will have a certain result in the complex probability set C. If the random event becomes completely predictable, then we will be fully able to predict the outcome of stochastic experiments that arise in the real world in all stochastic processes. Consequently, the work accomplished here was to extend the real probability set R to the deterministic complex probability set C = R + M by including the contributions of the set M, which is the imaginary set of probabilities. Since this extension was found to be successful, a novel paradigm of stochastic sciences and prognostics was laid down, in which all stochastic phenomena in R are expressed deterministically. I called this original model 'the Complex Probability Paradigm'; it was initiated and illustrated in my twelve research publications (Abou Jaoude, 2013a, 2013b, 2014, 2015a, 2015b, 2016a, 2016b, 2017a, 2017b, 2017c, 2018; Abou Jaoude, El-Tawil, & Kadry, 2010).
Accordingly, the advantages and the purposes of the current paper are to:

(1) Extend classical probability theory to the set of complex numbers, and therefore link the theory of probability to the field of complex variables and analysis. This task was started and elaborated in my previous twelve papers.
(2) Apply the new axioms of probability and the paradigm to Monte Carlo methods.
(3) Show that all stochastic phenomena can be expressed deterministically in the set of complex probabilities C.
(4) Measure and compute both the degree of our knowledge and the chaotic factor of Monte Carlo methods.
(5) Draw and illustrate the graphs of the parameters and functions of the original paradigm corresponding to Monte Carlo methods.
(6) Show that the classical concept of probability is always equal to one in the complex set; hence, no randomness, no chaos, no uncertainty, no ignorance, no disorder, and no unpredictability exist in C.
(7) Prove the convergence of the stochastic Monte Carlo procedures in an original way by using the newly defined axioms and paradigm.
(8) Pave the way to implement this novel model in other areas of stochastic processes and in the field of prognostics. These will be the topics of my future research works.
Concerning some applications of the original elaborated paradigm and as a future work, it can be applied to any random phenomena using Monte Carlo methods whether in the discrete or in the continuous cases.
Furthermore, compared with existing literature, the main contribution of the present research work is to apply the novel paradigm of complex probability to the concepts and techniques of the stochastic Monte Carlo methods and simulations.
The following figure shows the main purposes of the Complex Probability Paradigm (CPP) (Figure 1).

The original Andrey Nikolaevich Kolmogorov system of axioms
The simplicity of Kolmogorov's system of axioms may be surprising. Let E be a collection of elements {E₁, E₂, ...} called elementary events and let F be a set of subsets of E called random events. The five axioms for a finite set E are (Benton, 1966a, 1966b; Feller, 1968; Freund, 1973; Montgomery & Runger, 2003; Walpole, Myers, Myers, & Ye, 2002):

Axiom 1: F is a field of sets.
Axiom 2: F contains the set E.
Axiom 3: A non-negative real number Prob(A), called the probability of A, is assigned to each set A in F. We always have 0 ≤ Prob(A) ≤ 1.
Axiom 4: Prob(E) equals 1.
Axiom 5: If A and B have no elements in common, the number assigned to their union is Prob(A ∪ B) = Prob(A) + Prob(B); hence, we say that A and B are disjoint. Otherwise, we have Prob(A ∪ B) = Prob(A) + Prob(B) − Prob(A ∩ B).

We also say that Prob(A ∩ B) = Prob(A) × Prob(B/A) = Prob(B) × Prob(A/B), which involves the conditional probability. If A and B are independent, then Prob(A ∩ B) = Prob(A) × Prob(B).

Moreover, we can generalize and say that for N disjoint (mutually exclusive) events A₁, A₂, ..., A_j, ..., A_N (for 1 ≤ j ≤ N), we have the following additivity rule:

Prob(⋃_{j=1}^{N} A_j) = Σ_{j=1}^{N} Prob(A_j)

and for N independent events A₁, A₂, ..., A_j, ..., A_N (for 1 ≤ j ≤ N), we have the following product rule:

Prob(⋂_{j=1}^{N} A_j) = ∏_{j=1}^{N} Prob(A_j)

Adding the imaginary part M
Now, we can add to this system of axioms an imaginary part such that:

Axiom 6: Let Pm = i × (1 − Pr) be the probability of an associated event in M (the imaginary part) to the event A in R (the real part). It follows that Pr + Pm/i = 1, where i is the imaginary number with i = √−1 or i² = −1.
Axiom 7: We construct the complex number or vector Z = Pr + Pm = Pr + i(1 − Pr) having a norm |Z| such that |Z|² = Pr² + (Pm/i)².
Axiom 8: Let Pc denote the probability of an event in the complex probability universe C where C = R + M. We say that Pc is the probability of an event A in R with its associated event in M such that Pc² = [Pr + Pm/i]² = |Z|² − 2iPrPm, and it is always equal to 1.

We can see that, by taking into consideration the set of imaginary probabilities, we have added three new and original axioms, and consequently the system of axioms defined by Kolmogorov was expanded to encompass the set of imaginary numbers.
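The three extended axioms can be checked numerically. The helper below is an illustrative sketch (the function and variable names are our own) that computes Pm, Z, DOK = |Z|², Chf = 2iPrPm, and Pc² = DOK − Chf for a given Pr:

```python
def cpp_parameters(pr: float):
    """Given a real probability Pr, compute the CPP quantities of Axioms 6-8:
    Pm = i(1 - Pr), Z = Pr + Pm, DOK = |Z|^2, Chf = 2i*Pr*Pm, Pc^2 = DOK - Chf."""
    pm = 1j * (1.0 - pr)           # imaginary complementary probability (Axiom 6)
    z = pr + pm                    # complex probability vector (Axiom 7)
    dok = abs(z) ** 2              # degree of our knowledge = Pr^2 + (1 - Pr)^2
    chf = (2j * pr * pm).real      # chaotic factor = -2*Pr*(1 - Pr), a real number
    pc2 = dok - chf                # Axiom 8: always equal to 1
    return dok, chf, pc2

for pr in (0.0, 0.5, 1.0):
    print(pr, cpp_parameters(pr))
```

For any Pr in [0, 1], the printed Pc² stays equal to 1, which is exactly the claim of Axiom 8.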

The purpose of extending the axioms
After adding the three new axioms, it becomes clear that the addition of the imaginary dimensions to the real stochastic experiment yields a probability always equal to one in the complex probability set C. Actually, we will understand this result directly when we realize that the set of probabilities is now formed of two parts: the first part is real and the second part is imaginary. The stochastic event happening in the set R of real probabilities (like getting a head or a tail in the experiment of coin tossing) has a corresponding real probability Pr and a corresponding imaginary probability Pm. In addition, let M be the set of imaginary probabilities and let |Z|² be the Degree of Our Knowledge (DOK for short) of this experiment. According to the axioms of Kolmogorov, Pr is always the probability of the phenomenon in the set R (Barrow, 1992; Bogdanov & Bogdanov, 2009; Srinivasan & Mehata, 1988; Stewart, 1996; Stewart, 2002; Stewart, 2012).
• In fact, a total ignorance of the set M leads to: Prob(event) = Pr = 0.5, Pm = Prob(imaginary part) = 0.5i, and |Z|² = DOK in this case is equal to: 1 − 2Pr(1 − Pr) = 1 − (2 × 0.5) × (1 − 0.5) = 0.5.
• Conversely, a total knowledge of the set in R leads to: Prob(event) = Pr = 1 and Pm = Prob(imaginary part) = 0. Here we have DOK = 1 − (2 × 1) × (1 − 1) = 1 because the phenomenon is totally known, that is, all the variables and laws affecting the experiment are completely determined; therefore, our degree of knowledge (DOK) of the system is 1 = 100%.
• Now, if we are sure that an event will never happen, i.e. like 'getting nothing' (the empty set), then Pr = 0, that is, the event will never occur in R. Pm will be equal to i(1 − Pr) = i, and DOK = 1 − (2 × 0) × (1 − 0) = 1: we are sure that the event of getting nothing will never happen; therefore, the Degree of Our Knowledge (DOK) of the system is 1 = 100%.
We can deduce that we always have: 0.5 ≤ |Z|² ≤ 1, ∀ Pr: 0 ≤ Pr ≤ 1, and |Z|² = DOK = Pr² + (Pm/i)², where 0 ≤ Pr, Pm/i ≤ 1. What is crucial is that in all cases we have: Pc² = |Z|² − 2iPrPm = 1. Actually, according to an experimenter in R, the phenomenon is random: the experimenter ignores the outcome of the chaotic phenomenon. Each outcome will be assigned a probability Pr, and he will say that the outcome is nondeterministic. But in the complex probability universe C = R + M, the outcome of the random phenomenon will be totally predicted by the observer, since the contributions of the set M are taken into consideration, so this will give: Pc² = |Z|² − 2iPrPm = 1. Therefore, Pc is always equal to 1. Actually, adding the imaginary set to our stochastic phenomenon leads to the elimination of randomness, of ignorance, and of nondeterminism. Subsequently, conducting experiments of this class of phenomena in the set C is of great importance, since we will be able to foretell with certainty the output of all random phenomena. In fact, conducting experiments in the set R leads to uncertainty and unpredictability. So we place ourselves in the set C instead of the set R and then study the random events, since in C we take into consideration all the contributions of the set M, and therefore a deterministic study of the stochastic experiment becomes possible. Conversely, by taking into consideration the contributions of the probability set M we place ourselves in the set C, and by disregarding M we restrict our experiment to nondeterministic events in R (Bell, 1992; Bogdanov & Bogdanov, 2010; Bogdanov & Bogdanov, 2012; Bogdanov & Bogdanov, 2013; Boursin, 1986; Dacunha-Castelle, 1996; Dalmedico-Dahan, Chabert, & Chemla, 1992; Ekeland, 1991; Gleick, 1997; Van Kampen, 2006). Figure 2 shows Chf, DOK, and Pc for any probability distribution in 2D.
Furthermore, we can deduce from the above axioms and definitions that the term 2iPrPm will be called the chaotic factor in our stochastic event and will be denoted accordingly by 'Chf': Chf = 2iPrPm = −2Pr(1 − Pr). We will understand why we have named this term the chaotic factor; in fact:
• In case Pr = 1, that is, in the case of a certain event, the chaotic factor of the event is equal to 0: Chf = −2 × 1 × (1 − 1) = 0.
• In case Pr = 0, that is, in the case of an impossible event, Chf = −2 × 0 × (1 − 0) = 0. Therefore, in both of these last two cases there is no chaos because the output of the event is certain and known in advance.
• In case Pr = 0.5, Chf = −2 × 0.5 × (1 − 0.5) = −0.5, which is its minimum.
So, we deduce that: −0.5 ≤ Chf ≤ 0, ∀ Pr: 0 ≤ Pr ≤ 1 (Figures 2-4). Consequently, what is truly interesting here is that we have quantified both the degree of our knowledge and the chaotic factor of any stochastic phenomenon, and hence we can state accordingly: Pc² = DOK − Chf = |Z|² − 2iPrPm = 1. Then we can conclude that: Pc² = degree of our knowledge of the system − chaotic factor = 1; therefore, Pc = 1 permanently and constantly.
This directly leads to the following crucial conclusion: if we succeed in subtracting and eliminating the chaotic factor in any stochastic phenomenon, then the outcome probability will always be equal to one (Abou Jaoude, 2013a, 2013b, 2014, 2015a, 2015b, 2016a, 2016b, 2017a, 2017b, 2017c, 2018; Abou Jaoude et al., 2010; Dalmedico-Dahan & Peiffer, 1986; Davies, 1993; Gillies, 2000; Guillen, 1995; Gullberg, 1997; Hawking, 2002, 2005, 2011; Pickover, 2008; Science Et Vie, 1999). The graph below (Figure 5) illustrates the linear relation between DOK and Chf. Furthermore, we require in our present analysis the absolute value of the chaotic factor, which quantifies the magnitude of the chaotic and stochastic influences on the random system considered, materialized by the real probability Pr and a probability density function, and which leads to increasing or decreasing system chaos in R. This additional and original term will be denoted accordingly MChf, or Magnitude of the Chaotic factor. Therefore, we define this new term by: MChf = |Chf| = −2iPrPm = 2Pr(1 − Pr), with 0 ≤ MChf ≤ 0.5, and Pc² = DOK + MChf = 1. The graph below (Figure 6) illustrates the linear relation between DOK and MChf. Moreover, Figures 7-13 illustrate the graphs of Chf, MChf, DOK, and Pc as functions of the real probability Pr and of the random variable X for any probability distribution and for a Weibull probability distribution (Abou Jaoude, 2013a, 2013b, 2014, 2015a, 2015b, 2016a, 2016b, 2017a, 2017b, 2017c, 2018; Abou Jaoude et al., 2010).
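A small numerical sweep (an illustrative sketch, not part of the original papers) confirms the stated bounds 0.5 ≤ DOK ≤ 1 and −0.5 ≤ Chf ≤ 0, together with the identity Pc² = DOK + MChf = 1:

```python
def check_bounds(steps: int = 1000) -> bool:
    """Sweep Pr across [0, 1] and assert the CPP bounds and identities."""
    for k in range(steps + 1):
        pr = k / steps                      # real probability on a uniform grid
        dok = pr**2 + (1 - pr)**2           # degree of our knowledge = |Z|^2
        chf = -2 * pr * (1 - pr)            # chaotic factor
        mchf = abs(chf)                     # magnitude of the chaotic factor
        assert 0.5 - 1e-12 <= dok <= 1.0 + 1e-12
        assert -0.5 - 1e-12 <= chf <= 0.0
        assert abs((dok + mchf) - 1.0) < 1e-12   # Pc^2 = DOK + MChf = 1
    return True

print(check_bounds())
```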
To conclude and to summarize: in the real probability universe R, our degree of certain knowledge is regrettably imperfect; therefore, we extend our study to the complex set C, which embraces the contributions of both the real probability set R and the imaginary probability set M. Subsequently, this leads to a perfect and complete degree of knowledge in the universe C = R + M (since Pc = 1). In fact, working in the complex universe C leads to a certain prediction of any random event, because in C we eliminate and subtract from the calculated degree of our knowledge the quantified chaotic factor. This yields a probability in the universe C equal to one (Pc² = DOK − Chf = DOK + MChf = 1, so Pc = 1). Many illustrations considering various continuous and discrete probability distributions in my twelve previous research papers verify this hypothesis and novel paradigm (Abou Jaoude, 2013a, 2013b, 2014, 2015a, 2015b, 2016a, 2016b, 2017a, 2017b, 2017c, 2018; Abou Jaoude et al., 2010). The Extended Kolmogorov Axioms (EKA for short) or the Complex Probability Paradigm (CPP for short) can be summarized and shown in the following figure (Figure 14).


The Monte Carlo techniques of integration and simulation (Gentle)

A Monte Carlo method is, in essence, an experiment with random numbers. This name, after the casino at Monaco, was first applied around 1944 to the method of solving deterministic problems by reformulating them in terms of a problem with random elements which could then be solved by large-scale sampling. By extension, the term has come to mean any simulation that uses random numbers.
The development and proliferation of computers has led to the widespread use of Monte Carlo methods in virtually all branches of science, ranging from nuclear physics (where computer-aided Monte Carlo was first applied) to astrophysics, biology, engineering, medicine, operations research, and the social sciences.
The Monte Carlo method of solving problems by using random numbers in a computer (either by direct simulation of physical or statistical problems, or by reformulating deterministic problems in terms of one incorporating randomness) has become one of the most important tools of applied mathematics and computer science. A significant proportion of articles in technical journals in such fields as physics, chemistry, and statistics report results of Monte Carlo simulations or suggestions on how they might be applied. Some journals are devoted almost entirely to Monte Carlo problems in their fields. Studies of the formation of the universe or of stars and their planetary systems use Monte Carlo techniques. Genetics, the biochemistry of DNA, and the random configuration and knotting of biological molecules are also studied by Monte Carlo methods. In number theory, Monte Carlo methods play an important role in determining primality or factoring very large integers far beyond the range of deterministic methods. Several important new statistical techniques, such as 'bootstrapping' and 'jackknifing', are based on Monte Carlo methods.
Hence, the role of Monte Carlo methods and simulation in all of the sciences has increased in importance during the past several years. These methods play a central role in the rapidly developing subdisciplines of the computational physical sciences, the computational life sciences, and the other computational sciences. The growing power of computers and the evolving simulation methodology have thus led to the recognition of computation as a third approach for advancing the natural sciences, together with theory and traditional experimentation. At the kernel of Monte Carlo simulation is random number generation.

Now we turn to the approximation of a definite integral by the Monte Carlo method. If we select the first N elements x₁, x₂, ..., x_N from a random sequence in the interval (0, 1), then:

∫₀¹ f(x) dx ≈ (1/N) Σ_{i=1}^{N} f(x_i)

Here the integral is approximated by the average of the N numbers f(x₁), f(x₂), ..., f(x_N). When this is actually carried out, the error is of order 1/√N, which is not at all competitive with good algorithms, such as the Romberg method. However, in higher dimensions, the Monte Carlo method can be quite attractive. For example:

∫₀¹ ∫₀¹ ∫₀¹ f(x, y, z) dx dy dz ≈ (1/N) Σ_{j=1}^{N} f(x_j, y_j, z_j)

where (x_j, y_j, z_j) is a random sequence of N points in the unit cube 0 ≤ x ≤ 1, 0 ≤ y ≤ 1, and 0 ≤ z ≤ 1. To obtain random points in the cube, we assume that we have a random sequence in (0, 1) denoted by ξ₁, ξ₂, ξ₃, ξ₄, ξ₅, ξ₆, .... To get our first random point p₁ in the cube, just let p₁ = (ξ₁, ξ₂, ξ₃). The second is, of course, p₂ = (ξ₄, ξ₅, ξ₆), and so on. If the interval (in a one-dimensional integral) is not of length 1 but is, say, the general interval (a, b), then the average of f over N random points in (a, b) is not simply an approximation for the integral but rather for:

(1/(b − a)) ∫_a^b f(x) dx

which agrees with our intuition that the function f(x) = 1 has an average of 1. Similarly, in higher dimensions, the average of f over a region is obtained by integrating and dividing by the area, volume, or measure of that region.
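The one-dimensional and three-dimensional estimates above can be sketched in a few lines of Python (the function names are illustrative); for the cube, the uniform stream is consumed three values at a time, exactly as described:

```python
import random

def mc_integral_1d(f, n: int, seed: int = 0) -> float:
    """Approximate the integral of f over (0, 1) by the average of f at n
    uniform random points; the error decreases like 1/sqrt(n)."""
    rng = random.Random(seed)
    return sum(f(rng.random()) for _ in range(n)) / n

def mc_integral_cube(f, n: int, seed: int = 0) -> float:
    """Approximate the integral of f(x, y, z) over the unit cube: consume the
    uniform stream three values at a time to form each random point."""
    rng = random.Random(seed)
    return sum(f(rng.random(), rng.random(), rng.random()) for _ in range(n)) / n

# integral of x^2 over (0, 1) is 1/3; integral of x*y*z over the cube is 1/8
print(mc_integral_1d(lambda x: x * x, 100_000))
print(mc_integral_cube(lambda x, y, z: x * y * z, 100_000))
```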
For instance, the average of f over the parallelepiped described by the three inequalities a₁ ≤ x ≤ b₁, a₂ ≤ y ≤ b₂, a₃ ≤ z ≤ b₃ is:

(1/[(b₁ − a₁)(b₂ − a₂)(b₃ − a₃)]) ∫_{a₃}^{b₃} ∫_{a₂}^{b₂} ∫_{a₁}^{b₁} f(x, y, z) dx dy dz

To keep the limits of integration straight, we recall that, in each of these Monte Carlo techniques, the random points should be uniformly distributed in the regions involved.
In general, for a region A of measure (length, area, or volume) m(A) and uniformly distributed random points p₁, p₂, ..., p_N in A, we have:

(1/m(A)) ∫_A f ≈ (1/N) Σ_{i=1}^{N} f(p_i)

Here we are using the fact that the average of a function on a set is equal to the integral of the function over the set divided by the measure of the set.
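When the region is not of unit measure, the sample average estimates the mean of f, so the integral is recovered by multiplying by the measure of the region. A minimal one-dimensional sketch (the function name is our own):

```python
import random

def mc_integral_interval(f, a: float, b: float, n: int, seed: int = 0) -> float:
    """Approximate the integral of f over (a, b): the average of f at n uniform
    points in (a, b) estimates the *mean* of f, so multiply by the measure b - a."""
    rng = random.Random(seed)
    avg = sum(f(rng.uniform(a, b)) for _ in range(n)) / n
    return (b - a) * avg

# integral of f(x) = 1 over (a, b) must come out to b - a, as the text requires
print(mc_integral_interval(lambda x: 1.0, 2.0, 5.0, 10_000))   # exactly 3.0
```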

The probabilities of convergence and divergence
Let R_E be the exact result of the random experiment or of a simple or multidimensional integral that is not always possible to evaluate by the ordinary methods of probability theory, calculus, or deterministic numerical methods. And let R_A be the approximate result of these experiments and integrals found by Monte Carlo methods. The relative error in the Monte Carlo methods is:

|(R_E − R_A)/R_E|

In addition, the percent relative error is 100% × |(R_E − R_A)/R_E| and is always between 0% and 100%; therefore, the relative error is always between 0 and 1. Hence:

0 ≤ |(R_E − R_A)/R_E| ≤ 1

Moreover, we define the real probability by:

Pr = 1 − |(R_E − R_A)/R_E| = probability of Monte Carlo method convergence in R

And therefore:

Pm = i(1 − Pr) = i|(R_E − R_A)/R_E| = probability of Monte Carlo method divergence in the imaginary probability set M, since it is the imaginary complement of Pr.

Consequently, the relative error in the Monte Carlo method = probability of Monte Carlo method divergence in R, since it is the real complement of Pr.
We work in the case where 0 ≤ R_A ≤ 2R_E, so that 0 ≤ Pr ≤ 1. Therefore, if R_A = 0 or R_A = 2R_E, that is, before the beginning of the simulation, then:

Pr = 1 − 1 = 0 and Pm = i(1 − Pr) = i

And if R_A = R_E, that is, at the end of the Monte Carlo simulation, then:

Pr = 1 − 0 = 1 and Pm = i(1 − Pr) = 0
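The definitions of the relative error and of the convergence/divergence probabilities can be sketched as follows (an illustrative helper, assuming R_E ≠ 0 and 0 ≤ R_A ≤ 2R_E):

```python
def convergence_probabilities(r_a: float, r_e: float):
    """Relative error and the CPP convergence/divergence probabilities:
    Pr = 1 - |(R_E - R_A)/R_E|  (convergence in R),
    Pm/i = 1 - Pr               (divergence; Pm = i*(1 - Pr))."""
    rel_err = abs((r_e - r_a) / r_e)
    pr = 1.0 - rel_err            # probability of convergence in R
    pm_over_i = 1.0 - pr          # real magnitude of the imaginary probability
    return rel_err, pr, pm_over_i

r_e = 10.0
print(convergence_probabilities(0.0, r_e))      # before the simulation: Pr = 0
print(convergence_probabilities(r_e, r_e))      # at convergence: Pr = 1
print(convergence_probabilities(r_e / 2, r_e))  # halfway: Pr = 0.5
```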

The complex random vector Z in C
We have:

Z = Pr + Pm = Pr + i(1 − Pr) = [1 − |(R_E − R_A)/R_E|] + i|(R_E − R_A)/R_E|

That means that the complex random vector Z is the sum in C of the real probability of convergence in R and of the imaginary probability of divergence in M.
If R_A = 0 (before the simulation begins), then Pr = 0 and Z = i. If R_A = R_E/2 or R_A = 3R_E/2 (at the middle of the simulation), then Pr = 0.5 and Z = 0.5 + 0.5i.

The degree of our knowledge DOK
We have:

DOK = |Z|² = Pr² + (1 − Pr)² = 1 − 2Pr(1 − Pr) = 1 − 2[1 − |(R_E − R_A)/R_E|] × |(R_E − R_A)/R_E|

If DOK = 0.5, then solving the two second-degree equations for R_A/R_E gives R_A/R_E = 1/2 or R_A/R_E = 3/2. That means that DOK is minimum when the approximate result is equal to half of the exact result if 0 ≤ R_A ≤ R_E, or when the approximate result is equal to three halves of the exact result if R_E ≤ R_A ≤ 2R_E, that is, at the middle of the simulation.
In addition, if DOK = 1, then Pr = 0 or Pr = 1, that is, R_A = 0, R_A = 2R_E, or R_A = R_E, and vice versa. That means that DOK is maximum when the approximate result is equal to 0 or to 2R_E (before the beginning of the simulation) and when it is equal to the exact result (at the end of the simulation). We can deduce that we have perfect and total knowledge of the stochastic experiment before the beginning of the Monte Carlo simulation, since no randomness has been introduced yet, as well as at the end of the simulation, after the convergence of the method to the exact result.
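A quick numerical check of these extreme cases of DOK, using the definitions above (the helper name is our own):

```python
def dok_from_result(r_a: float, r_e: float) -> float:
    """DOK = Pr^2 + (1 - Pr)^2 with Pr = 1 - |(R_E - R_A)/R_E|."""
    pr = 1.0 - abs((r_e - r_a) / r_e)
    return pr**2 + (1.0 - pr)**2

r_e = 4.0
# maximum (DOK = 1) before the simulation (R_A = 0 or 2*R_E) and at convergence (R_A = R_E)
print(dok_from_result(0.0, r_e), dok_from_result(2 * r_e, r_e), dok_from_result(r_e, r_e))
# minimum (DOK = 0.5) at the middle of the simulation, R_A = R_E/2 and R_A = 3*R_E/2
print(dok_from_result(r_e / 2, r_e), dok_from_result(3 * r_e / 2, r_e))
```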

The chaotic factor Chf
We have Chf = 2iP_rP_m = −2P_r(1 − P_r), so that −0.5 ≤ Chf ≤ 0, and Chf = −0.5 if and only if R_A/R_E = 1/2 or R_A/R_E = 3/2, and vice versa. That means that Chf is minimum when the approximate result is equal to half of the exact result if 0 ≤ R_A ≤ R_E, or when the approximate result is equal to three halves of the exact result if R_E ≤ R_A ≤ 2R_E; that is, at the middle of the simulation.
In addition, if Chf = 0 then R_A = 0, R_A = 2R_E, or R_A = R_E. That means that Chf is equal to 0 when the approximate result is equal to 0 or to 2R_E (before the beginning of the simulation) and when it is equal to the exact result (at the end of the simulation).

The magnitude of the chaotic factor MChf
We have MChf = |Chf| = 2P_r(1 − P_r), so that 0 ≤ MChf ≤ 0.5, and MChf = 0.5 if and only if R_A/R_E = 1/2 or R_A/R_E = 3/2, and vice versa.
That means that MChf is maximum when the approximate result is equal to half of the exact result if 0 ≤ R_A ≤ R_E, or when the approximate result is equal to three halves of the exact result if R_E ≤ R_A ≤ 2R_E; that is, at the middle of the simulation. This implies that the magnitude of the chaos (MChf) introduced by the random variables used in the Monte Carlo method is maximum halfway through the simulation.
In addition, if MChf = 0 then R_A = 0, R_A = 2R_E, or R_A = R_E. That means that MChf is minimum and equal to 0 when the approximate result is equal to 0 or to 2R_E (before the beginning of the simulation) and when it is equal to the exact result (at the end of the simulation). We can deduce that the magnitude of the chaos in the stochastic experiment is null before the beginning of the Monte Carlo simulation, since no randomness has been introduced yet, as well as at the end of the simulation, after the convergence of the method to the exact result, when randomness has finished its task in the stochastic Monte Carlo method and experiment.
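The same kind of check can be made for the chaotic factor; this sketch assumes Chf = −2P_r(1 − P_r) and MChf = |Chf|:

```python
def chf(R_A, R_E):
    """Chaotic factor Chf = 2i*P_r*P_m = -2*P_r*(1 - P_r)."""
    P_r = 1.0 - abs((R_E - R_A) / R_E)   # convergence probability
    return -2.0 * P_r * (1.0 - P_r)

R_E = 1.0
# Chf (and hence MChf) is null before the simulation and at convergence:
assert chf(0.0, R_E) == chf(2 * R_E, R_E) == chf(R_E, R_E) == 0.0
# Chf is minimal (-0.5) and MChf = |Chf| maximal (0.5) at the middle:
assert chf(R_E / 2, R_E) == -0.5 and abs(chf(3 * R_E / 2, R_E)) == 0.5
```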

The probability Pc in the probability set C = R + M
We have Pc² = (P_r + P_m/i)² = DOK − Chf = DOK + MChf = 1, hence Pc = 1, always. This is due to the fact that in C we have subtracted, in the equation above, the chaotic factor Chf from our knowledge DOK, and we have therefore eliminated the chaos caused and introduced by all the random variables and the stochastic fluctuations that lead to approximate results in the Monte Carlo simulation in R. Therefore, since in C we always have the equivalent of R_A = R_E, the Monte Carlo simulation, which is a stochastic method by nature in R, becomes after applying the CPP a deterministic method in C, since the probability of convergence of any random experiment in C is constantly and permanently equal to 1 for any number of iterations N.
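That Pc² = DOK − Chf stays equal to 1 all along the simulation can be swept numerically; a minimal sketch under the same assumed CPP definitions:

```python
def pc_squared(R_A, R_E):
    """Pc^2 = DOK - Chf, which the CPP asserts is identically 1."""
    P_r = 1.0 - abs((R_E - R_A) / R_E)   # convergence probability
    DOK = P_r**2 + (1.0 - P_r)**2        # degree of our knowledge
    Chf = -2.0 * P_r * (1.0 - P_r)       # chaotic factor
    return DOK - Chf

R_E = 0.5
for k in range(11):                      # sweep R_A from 0 to 2*R_E
    R_A = 2.0 * R_E * k / 10.0
    assert abs(pc_squared(R_A, R_E) - 1.0) < 1e-12
```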

The rates of change of the probabilities in R, M, and C
Since P_r = R_A/R_E if 0 ≤ R_A ≤ R_E and P_r = 2 − R_A/R_E if R_E ≤ R_A ≤ 2R_E, then dP_r/dR_A = 1/R_E in the first case and dP_r/dR_A = −1/R_E in the second, with R_E > 0. That means that the slope of the probability of convergence in R, or its rate of change, is constant and positive if 0 ≤ R_A ≤ R_E and constant and negative if R_E ≤ R_A ≤ 2R_E, and it depends only on R_E. Hence, we have a constant increase in P_r (the convergence probability) as a function of the number of iterations N, as R_A increases from 0 to R_E or decreases from 2R_E to R_E, until P_r reaches the value 1, that is, until the random experiment converges to R_E.
Similarly, d(P_m/i)/dR_A = −1/R_E if 0 ≤ R_A ≤ R_E and d(P_m/i)/dR_A = 1/R_E if R_E ≤ R_A ≤ 2R_E, with R_E > 0. That means that the slopes of the probabilities of divergence in R and M, or their rates of change, are constant and negative if 0 ≤ R_A ≤ R_E and constant and positive if R_E ≤ R_A ≤ 2R_E, and they depend only on R_E. Hence, we have a constant decrease in P_m/i and P_m (the divergence probabilities) as functions of the number of iterations N, as R_A increases from 0 to R_E or decreases from 2R_E to R_E, until P_m/i and P_m reach the value 0, that is, until the random experiment converges to R_E. Additionally, |dZ/dR_A| = √2/R_E; that means that the modulus of the slope of the complex probability vector Z in C, or of its rate of change, is constant and positive, and it depends only on R_E. Hence, we have a constant increase in Re(Z) and a constant decrease in Im(Z) as functions of the number of iterations N, as Z goes from (0, i) at N = 0 to (1, 0) at the end of the simulation; hence, until Re(Z) = P_r reaches the value 1, that is, until the random experiment converges to R_E. Furthermore, since Pc² = DOK − Chf = DOK + MChf = 1, then Pc = 1 = the probability of convergence in C, and consequently dPc/dR_A = 0; that means that Pc is constantly equal to 1 for every value of R_A, of R_E, and of the number of iterations N, that is, for any stochastic experiment and for any simulation of the Monte Carlo method. So, we conclude that in C we have complete and perfect knowledge of the random experiment, which has now become a deterministic one, since the extension into the complex probability plane C defined by the CPP axioms has changed all stochastic variables into deterministic ones.
A powerful tool will be described in the current section; it was developed in my previous research papers and is founded on the concept of a complex random vector, that is, a vector combining the real and the imaginary probabilities of a random outcome, defined in the three added axioms of CPP by the term z_j = P_rj + P_mj.
Accordingly, we will define the vector Z as the resultant complex random vector, which is the sum of all the complex random vectors z_j in the complex probability plane C. This procedure is illustrated by considering first a general Bernoulli distribution, and then a discrete probability distribution with N equiprobable random vectors as the general case. In fact, if z represents one output from the uniform distribution U, then Z_U represents the whole system of outputs from the uniform distribution U, that is, the whole random distribution in the complex probability plane C. It follows directly that a Bernoulli distribution can be understood as a simplified system with two random outputs (section 6.1), whereas the general case is a random system with N random outputs (section 6.2). Afterward, I will prove the convergence of Monte Carlo methods using this new powerful concept (section 6.3).

The resultant complex random vector Z of a general Bernoulli distribution (A distribution with two random outputs)
First, let us consider the following general Bernoulli distribution and let us define its complex random vectors and their resultant (Table 1), where x_1 and x_2 are the outcomes of the first and second random vectors respectively, P_r1 and P_r2 are the real probabilities of x_1 and x_2 respectively, and P_m1 and P_m2 are the imaginary probabilities of x_1 and x_2 respectively.
We have Σ_{j=1}^{2} P_rj = P_r1 + P_r2 = p + q = 1 and Σ_{j=1}^{2} P_mj = P_m1 + P_m2 = i(1 − p) + i(1 − q) = i(N − 1) = i, where N is the number of random vectors or outcomes, which is equal to 2 for a Bernoulli distribution. The complex random vector corresponding to the random outcome x_1 is z_1 = P_r1 + P_m1 = p + i(1 − p), and the complex random vector corresponding to the random outcome x_2 is z_2 = P_r2 + P_m2 = q + i(1 − q). The resultant complex random vector is defined as Z = z_1 + z_2 = (p + q) + i(2 − p − q) = 1 + i. The probability Pc_1 in the complex plane C = R + M which corresponds to the complex random vector z_1 is computed as Pc_1² = DOK_1 − Chf_1 = [p² + (1 − p)²] + 2p(1 − p) = [p + (1 − p)]² = 1. This is coherent with the three novel complementary axioms defined for the CPP.
Similarly, Pc_2 corresponding to z_2 is Pc_2² = [q² + (1 − q)²] + 2q(1 − q) = 1. The probability Pc in the complex plane C which corresponds to the resultant complex random vector Z = 1 + i is computed in the same way, where s is an intermediary quantity used in our computation of Pc.
Pc is the probability corresponding to the resultant complex random vector Z in the probability universe C = R + M and is also equal to 1. Actually, Z represents both z_1 and z_2, that is, the whole distribution of random vectors of the general Bernoulli distribution in the complex plane C, and its probability Pc is computed in the same way as Pc_1 and Pc_2.
By analogy with the case of one random vector z_j, for the vector Z we have in general Pc² = DOK_Z − Chf_Z = DOK_Z + MChf_Z = 1, where the degree of our knowledge of the whole distribution is DOK_Z = |Z|²/N², its relative chaotic factor is Chf_Z = Chf/N², and its relative magnitude of the chaotic factor is MChf_Z = |Chf_Z|.
Notice that if N = 1 in the previous formula, then Pc² = DOK − Chf = 1, which is coherent with the calculations already done.
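A small numerical sketch of this Bernoulli computation; the parameter value p = 0.25 is an arbitrary illustration on my part (any p gives the same resultant Z = 1 + i):

```python
p = 0.25                     # arbitrary illustrative Bernoulli parameter
q = 1.0 - p
z1 = complex(p, 1.0 - p)     # z1 = P_r1 + P_m1 = p + i(1 - p)
z2 = complex(q, 1.0 - q)     # z2 = P_r2 + P_m2 = q + i(1 - q)
Z = z1 + z2                  # resultant vector, always 1 + i
assert Z == complex(1.0, 1.0)

N = 2
abs_Z_sq = Z.real**2 + Z.imag**2          # |Z|^2 = 2
DOK_Z = abs_Z_sq / N**2                   # = 0.5
Chf_Z = -2.0 * Z.real * Z.imag / N**2     # = -0.5
assert DOK_Z - Chf_Z == 1.0               # Pc^2 = DOK_Z - Chf_Z = 1
```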
To illustrate the concept of the resultant complex random vector Z, I will use the following graph ( Figure 15). Figure 15. The resultant complex random vector Z = z 1 + z 2 for a general Bernoulli distribution in the complex probability plane C.

The general case: a discrete distribution with N equiprobable random vectors (A uniform distribution U with N random outputs)
As a general case, let us consider this discrete probability distribution with N equiprobable random vectors, which is a discrete uniform probability distribution U with N outputs (Table 2). We have here, in C = R + M, P_rj = 1/N and P_mj = i(1 − 1/N) for every j, hence z_j = 1/N + i(1 − 1/N). Moreover, we can notice that |z_1| = |z_2| = · · · = |z_N|; hence the resultant complex random vector is Z_U = Σ_{j=1}^{N} z_j = 1 + i(N − 1), where s is an intermediary quantity used in our computation of Pc_U.
Therefore, the degree of our knowledge corresponding to the resultant complex vector Z_U representing the whole uniform distribution is DOK_ZU = |Z_U|²/N² = [1 + (N − 1)²]/N², its relative chaotic factor is Chf_ZU = −2(N − 1)/N², and similarly its relative magnitude of the chaotic factor is MChf_ZU = 2(N − 1)/N². Thus, we can verify that we always have Pc² = DOK_ZU − Chf_ZU = [1 + (N − 1)² + 2(N − 1)]/N² = N²/N² = 1.
We can deduce mathematically, using calculus, that lim_{N→+∞} DOK_ZU = 1 and lim_{N→+∞} Chf_ZU = 0. From the above, we can also draw this conclusion: the more N increases, the closer the degree of our knowledge in R corresponding to the resultant complex vector comes to being perfect and absolute, that is, equal to one, and the closer the chaotic factor, which prevents us from foretelling exactly and totally the outcome of the stochastic phenomenon in R, comes to zero. Mathematically, we state that if N tends to infinity, then the degree of our knowledge in R tends to one and the chaotic factor tends to zero.
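Evaluating the closed-form expressions DOK_ZU = [1 + (N − 1)²]/N² and Chf_ZU = −2(N − 1)/N² for growing N makes these limits visible; a minimal sketch:

```python
def uniform_cpp(N):
    """DOK and Chf of the resultant vector Z_U = 1 + i(N - 1) of a
    uniform distribution with N equiprobable outputs."""
    DOK = (1 + (N - 1)**2) / N**2
    Chf = -2 * (N - 1) / N**2
    return DOK, Chf

assert uniform_cpp(2) == (0.5, -0.5)          # the two-output case
for N in (10, 1000, 10**6):
    DOK, Chf = uniform_cpp(N)
    assert abs((DOK - Chf) - 1.0) < 1e-9      # Pc^2 = DOK - Chf = 1
# DOK -> 1 and Chf -> 0 as N -> infinity:
DOK, Chf = uniform_cpp(10**9)
assert DOK > 0.999999 and abs(Chf) < 1e-8
```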

The convergence of Monte Carlo methods using Z and CPP
Subsequently, if lim_{N→+∞} Chf_ZU = 0, then lim_{N→+∞} Chf_MC = 0 (the chaotic factor of Monte Carlo methods), provided that: (1) the Monte Carlo algorithm used to solve the stochastic process or integral is correct; (2) the integral that we want to solve using Monte Carlo methods is convergent. Therefore, either the simulation has not started yet (Prob(convergence) = 0) or the Monte Carlo algorithm result or output has converged to the exact result (Prob(convergence) → 1), since Chf_MC = 0 in only two places, which are N = 0 and N → +∞.
Equivalently, in terms of the approximate result, either: • the simulation has not started yet (R_A = 0 or R_A = 2R_E), since at this instant the percent relative error is maximum and equal to 100%; • or the Monte Carlo algorithm output has converged to the exact result (R_A → R_E), since at this instant the percent relative error is minimum and equal to 0%; this is due to the fact that Chf_MC = 0 in only two places, which are N = 0 and N → +∞.
Moreover, the speed of the convergence of Monte Carlo methods depends on: (1) the algorithm used; (2) the integrand function of the original integral that we want to evaluate (f(x), or in general f(x_1, x_2, . . . , x_n)); (3) the random numbers generator that provides the integrand function with random inputs for the Monte Carlo methods. In the current research work we have used one specific uniform random numbers generator, although many others exist in the literature.

Furthermore, for N = 1 we have DOK_ZU = 1 (the DOK of Monte Carlo methods) and Chf_ZU = 0. This means that we have a random experiment with only one outcome or vector; hence, either P_r = 1 (always converging) or P_r = 0 (always diverging), that is, we have respectively either a sure event or an impossible event in R. Consequently, the degree of our knowledge is surely equal to one (perfect knowledge of the experiment) and the chaotic factor is equal to zero (no chaos), since the experiment is either certain (that is, we have used a deterministic algorithm, so the stochastic Monte Carlo methods are replaced by deterministic methods that do not use random numbers, like the classical and ordinary methods of numerical integration) or impossible (an incorrect or divergent algorithm or integral), which is absolutely logical. Consequently, we have proved here the law of large numbers (already discussed in the published paper (Abou Jaoude, 2015b)) as well as the convergence of Monte Carlo methods using CPP. The following figures (Figures 16 and 17) show the convergence of Chf_ZU to 0 and of DOK_ZU to 1 as functions of the number N of uniform samples (number of inputs/outputs).

The Evaluation of the new paradigm parameters
We can deduce from what has been elaborated previously the following. The real convergence probability is P_r(N) = 1 − |[R_E − R_A(N)]/R_E|, with 0 ≤ N ≤ N_C, where N = 0 corresponds to the instant before the beginning of the random experiment, when R_A(N = 0) = 0 or 2R_E, and N = N_C (the number of iterations needed for the method to converge) corresponds to the instant at the end of the random experiments and Monte Carlo methods, when R_A(N_C) = R_E. The imaginary divergence probability is P_m(N) = i[1 − P_r(N)], and the real complementary divergence probability is P_m(N)/i = 1 − P_r(N). The complex probability and random vector is z(N) = P_r(N) + P_m(N). The degree of our knowledge is DOK(N) = P_r²(N) + [P_m(N)/i]². The chaotic factor is Chf(N) = 2iP_r(N)P_m(N); Chf(N) is null when P_r(N) = P_r(0) = 0 and when P_r(N) = P_r(N_C) = 1. The magnitude of the chaotic factor is MChf(N) = |Chf(N)|; MChf(N) is null when P_r(N) = P_r(0) = 0 and when P_r(N) = P_r(N_C) = 1. At any iteration number N, with 0 ≤ N ≤ N_C, the probability expressed in the complex probability set C is Pc²(N) = DOK(N) − Chf(N) = 1. Hence, the prediction of the convergence probabilities of the stochastic Monte Carlo experiments in the set C is permanently certain.
Let us consider thereafter some stochastic experiments and some single and multidimensional integrals to simulate the Monte Carlo methods and to draw, to visualize, as well as to quantify all the CPP and prognostic parameters.

Flowchart of the complex probability and Monte Carlo methods prognostic model
The following flowchart summarizes all the procedures of the proposed complex probability prognostic model:

Simulation of the new paradigm
Note that all the numerical values found in the simulations of the new paradigm, for any number of iteration cycles N, were computed using MATLAB version 2019. In addition, the reader should be mindful of rounding errors, since all numerical values are represented by at most five significant digits and since we are using Monte Carlo methods of integration and simulation, which give approximate results subject to random effects and fluctuations.

The first simple integral: a linear function
Let us consider the integral of a linear function whose exact value, obtained by the deterministic methods of calculus, is R_E = 0.5.
The Monte Carlo estimate gives R_A, with 1 ≤ N ≤ N_C, after applying the Monte Carlo method. Moreover, the four figures (Figures 18-21) show the increasing convergence of the Monte Carlo method and simulation to the exact result R_E = 0.5 for N = 50, 100, 500, and N = N_C = 100,000 iterations. Therefore, we have P_r(N_C) = 1, which is equal to the convergence probability of the Monte Carlo method as N → +∞. Additionally, Figure 22 clearly illustrates the relation of the Monte Carlo method to the complex probability paradigm, with all its parameters (Chf, R_A, P_r, MChf, R_E, DOK, P_m/i, Pc), after applying it to this linear function.
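As an illustration of this first example, a plain Monte Carlo estimator can be sketched as follows. The integrand x over [0, 1] (so that the exact value is 0.5) is an assumption on my part, since this extract only states that the function is linear with R_E = 0.5:

```python
import random

def mc_integrate(f, a, b, N, seed=0):
    """Plain Monte Carlo estimate of the integral of f over [a, b]."""
    rng = random.Random(seed)
    total = sum(f(a + (b - a) * rng.random()) for _ in range(N))
    return (b - a) * total / N

R_E = 0.5                                    # exact result from calculus
R_A = mc_integrate(lambda x: x, 0.0, 1.0, 100_000)
P_r = 1.0 - abs((R_E - R_A) / R_E)           # CPP convergence probability
print(R_A, P_r)                              # R_A near 0.5, P_r near 1
```

The same estimator applies verbatim to the cubic, exponential, and logarithmic examples below by swapping the integrand and the bounds.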

The second simple integral: a cubic function
Let us consider the integral of a cubic function whose exact value, obtained by the deterministic methods of calculus, is R_E = 0.25.
The Monte Carlo estimate gives R_A, with 1 ≤ N ≤ N_C, after applying the Monte Carlo method. Moreover, the four figures (Figures 23-26) show the increasing convergence of the Monte Carlo method and simulation to the exact result R_E = 0.25 for N = 50, 100, 500, and N = N_C = 100,000 iterations. Therefore, we have P_r(N_C) = 1, which is equal to the convergence probability of the Monte Carlo method as N → +∞. Additionally, Figure 27 clearly illustrates the relation of the Monte Carlo method to the complex probability paradigm, with all its parameters (Chf, R_A, P_r, MChf, R_E, DOK, P_m/i, Pc), after applying it to this cubic function.

The third simple integral: an increasing exponential function
Let us consider the integral of an increasing exponential function whose exact value, obtained by the deterministic methods of calculus, is R_E = 1.718281828 . . .
The Monte Carlo estimate gives R_A, with 1 ≤ N ≤ N_C, after applying the Monte Carlo method. Moreover, the four figures (Figures 28-31) show the increasing convergence of the Monte Carlo method and simulation to the exact result R_E = 1.718281828 . . . for N = 50, 100, 500, and N = N_C = 100,000 iterations. Therefore, we have P_r(N_C) = 1, which is equal to the convergence probability of the Monte Carlo method as N → +∞. Additionally, Figure 32 clearly illustrates the relation of the Monte Carlo method to the complex probability paradigm, with all its parameters (Chf, R_A, P_r, MChf, R_E, DOK, P_m/i, Pc), after applying it to this increasing exponential function.

The fourth simple integral: a logarithmic function
Let us consider the integral of a logarithmic function whose exact value, obtained by the deterministic methods of calculus, is R_E = 0.386294361 . . .
The Monte Carlo estimate of the mean of Ln(x_j) gives R_A, with 1 ≤ N ≤ N_C, after applying the Monte Carlo method. Moreover, the four figures (Figures 33-36) show the increasing convergence of the Monte Carlo method and simulation to the exact result R_E = 0.386294361 . . . for N = 50, 100, 500, and N = N_C = 100,000 iterations. Therefore, we have P_r(N_C) = 1, which is equal to the convergence probability of the Monte Carlo method as N → +∞. Additionally, Figure 37 clearly illustrates the relation of the Monte Carlo method to the complex probability paradigm, with all its parameters (Chf, R_A, P_r, MChf, R_E, DOK, P_m/i, Pc), after applying it to this logarithmic function.

A multiple integral
Let us consider the multidimensional integral of a function of three variables whose exact value is obtained by the deterministic methods of calculus.
The Monte Carlo estimate of the mean of x_j y_j z_j gives R_A, with 1 ≤ N ≤ N_C, after applying the Monte Carlo method, for N = 50, 100, 500, and N = N_C = 100,000 iterations. Therefore, we have P_r(N_C) = 1, which is equal to the convergence probability of the Monte Carlo method as N → +∞. Additionally, Figure 42 clearly illustrates the relation of the Monte Carlo method to the complex probability paradigm, with all its parameters (Chf, R_A, P_r, MChf, R_E, DOK, P_m/i, Pc), after applying it to this three-dimensional integral.
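Since the sampled quantity here is x_j y_j z_j, a natural reading (an assumption on my part, as the integration domain is not reproduced in this extract) is the integral of xyz over the unit cube [0, 1]³, whose exact value by calculus is 1/8:

```python
import random

rng = random.Random(1)
N = 200_000
# Sample the integrand x*y*z at uniformly random points of the unit cube.
total = sum(rng.random() * rng.random() * rng.random() for _ in range(N))
R_A = total / N                  # Monte Carlo estimate of the triple integral
R_E = 1.0 / 8.0                  # exact value of the assumed integral
print(R_A)                       # close to 0.125
```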

The first random experiment: a random walk in a plane
We will try in this problem to simulate random walks in a plane, each walk starting at O(0, 0) and consisting of s = 10,000 steps of length L = 0.008. Probability theory says that after s steps, the expected distance from the starting point will be L × √s. So, the estimated distance in the programme will be 0.008 × √10,000 = 0.008 × 100 = 0.8 = R_E. The figure below shows a random walk in a plane (Figure 43). Moreover, the four figures (Figures 44-47) show the increasing convergence of the Monte Carlo method and simulation to the exact result R_E = 0.8 for N = 50, 100, 500, and N = N_C = 100,000 iterations. Therefore, we have P_r(N_C) = 1, which is equal to the convergence probability of the Monte Carlo method as N → +∞. Additionally, Figure 48 clearly illustrates the relation of the Monte Carlo method to the complex probability paradigm, with all its parameters (Chf, R_A, P_r, MChf, R_E, DOK, P_m/i, Pc), after applying it to this random walk problem.
In the second cube (Figure 50), we can notice the simulation of the convergence probability P_r(N) and its complementary real divergence probability P_m(N)/i in terms of the iterations N for the random walk problem. The line in cyan is the projection of Pc²(N) = P_r(N) + P_m(N)/i = 1 = Pc(N) on the plane N = 0 iterations. This line starts at the point (P_r = 0, P_m/i = 1) and ends at the point (P_r = 1, P_m/i = 0). The red curve represents P_r(N) in the plane P_r(N) = P_m(N)/i. This curve starts at the point J (P_r = 0, P_m/i = 1, N = 0 iterations), reaches the point K (P_r = 0.5, P_m/i = 0.5, N = 50,000 iterations), and gets at the end to L (P_r = 1, P_m/i = 0, N = N_C = 100,000 iterations). The blue curve represents P_m(N)/i in the plane P_r(N) + P_m(N)/i = 1. Notice the importance of the point K, which is the intersection of the red and blue curves at N = 50,000 iterations, when P_r(N) = P_m(N)/i = 0.5. The three points J, K, L are the same as in Figure 48.
In the third cube (Figure 51), we can notice the simulation of the complex random vector Z(N) in C as a function of the real convergence probability P_r(N) = Re(Z) in R and of its complementary imaginary divergence probability P_m(N) = i × Im(Z) in M, in terms of the iterations N for the random walk problem. The red curve represents P_r(N) in the plane P_m(N) = 0 and the blue curve represents P_m(N) in the plane P_r(N) = 0. The green curve represents the complex probability vector Z(N) = P_r(N) + P_m(N) = Re(Z) + i × Im(Z) in the plane P_r(N) = iP_m(N) + 1. The curve of Z(N) starts at the point J (P_r = 0, P_m = i, N = 0 iterations) and ends at the point L (P_r = 1, P_m = 0, N = N_C = 100,000 iterations). The line in cyan is P_r(0) = iP_m(0) + 1 and it is the projection of the Z(N) curve on the complex probability plane whose equation is N = 0 iterations. This projected line starts at the point J (P_r = 0, P_m = i, N = 0 iterations) and ends at the point (P_r = 1, P_m = 0, N = 0 iterations). Notice the importance of the point K corresponding to N = 50,000 iterations, when P_r = 0.5 and P_m = 0.5i. The three points J, K, L are the same as in Figure 48.
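The walk itself is easy to simulate. The sketch below assumes steps of fixed length L in uniformly random directions (the paper's exact step model is not reproduced in this extract) and checks the root-mean-square distance, which is the quantity that equals L√s exactly:

```python
import math
import random

def walk_distance(steps, L, rng):
    """Distance from the origin after a 2-D walk of fixed-length steps
    taken in uniformly random directions (an assumed step model)."""
    x = y = 0.0
    for _ in range(steps):
        theta = rng.uniform(0.0, 2.0 * math.pi)   # random direction
        x += L * math.cos(theta)
        y += L * math.sin(theta)
    return math.hypot(x, y)

rng = random.Random(0)
s, L = 10_000, 0.008
R_E = L * math.sqrt(s)           # = 0.8, the text's estimated distance
walks = 200                      # far fewer walks than the paper's N_C, for speed
mean_sq = sum(walk_distance(s, L, rng) ** 2 for _ in range(walks)) / walks
print(math.sqrt(mean_sq))        # root-mean-square distance, close to 0.8
```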

The second random experiment: the birthday problem
The statement of the second random experiment is the following: find the probability that n people (n ≤ 365) selected at random will have n different birthdays.

Theoretical Analysis
We assume that there are only 365 days in a year (not a leap year) and that all birthdays are equally probable, assumptions which are not quite met in reality.
The first of the n people has of course some birthday, with probability 365/365 = 1. Then, if the second is to have a different birthday, it must occur on one of the other 364 days. Therefore, the probability that the second person has a birthday different from the first is 364/365. Similarly, the probability that the third person has a birthday different from the first two is 363/365. Finally, the probability that the nth person has a birthday different from all the others is (365 − n + 1)/365. We therefore have P(n different birthdays) = (365/365) × (364/365) × · · · × [(365 − n + 1)/365]. The table below gives the theoretical probabilities of different birthdays for a selected number of people n (Table 3). Moreover, the four figures (Figures 52-55) show the increasing convergence of the Monte Carlo method and simulation to the exact result R_E = 0.80558972 . . . for n = 13 people and for N = 50, 100, 500, and N = N_C = 500,000,000 iterations. Therefore, we have P_r(N_C) = 1, which is equal to the convergence probability of the Monte Carlo method as N → +∞. Additionally, Figure 56 clearly illustrates the relation of the Monte Carlo method to the complex probability paradigm, with all its parameters (Chf, R_A, P_r, MChf, R_E, DOK, P_m/i, Pc), after applying it to this birthday problem.
In the first cube (Figure 57), the curves of DOK(N) and Chf(N) for the birthday problem start at the point J (DOK = 1, Chf = 0, N = 0 iterations), reach their extrema at the point K (DOK = 0.5, Chf = −0.5, N = 250,000,000 iterations), and return at the end to (DOK = 1, Chf = 0, N = N_C = 500,000,000 iterations). The three points J, K, L are the same as in Figure 56.
In the second cube (Figure 58), we can notice the simulation of the convergence probability P_r(N) and its complementary real divergence probability P_m(N)/i in terms of the iterations N for the birthday problem. The line in cyan is the projection of Pc²(N) = P_r(N) + P_m(N)/i = 1 = Pc(N) on the plane N = 0 iterations. This line starts at the point (P_r = 0, P_m/i = 1) and ends at the point (P_r = 1, P_m/i = 0). The red curve represents P_r(N) in the plane P_r(N) = P_m(N)/i. This curve starts at the point J (P_r = 0, P_m/i = 1, N = 0 iterations), reaches the point K (P_r = 0.5, P_m/i = 0.5, N = 250,000,000 iterations), and gets at the end to L (P_r = 1, P_m/i = 0, N = N_C = 500,000,000 iterations). The blue curve represents P_m(N)/i in the plane P_r(N) + P_m(N)/i = 1. Notice the importance of the point K, which is the intersection of the red and blue curves at N = 250,000,000 iterations, when P_r(N) = P_m(N)/i = 0.5. The three points J, K, L are the same as in Figure 56.
In the third cube (Figure 59), we can notice the simulation of the complex random vector Z(N) in C as a function of the real convergence probability P_r(N) = Re(Z) in R and of its complementary imaginary divergence probability P_m(N) = i × Im(Z) in M, in terms of the iterations N for the birthday problem. The red curve represents P_r(N) in the plane P_m(N) = 0 and the blue curve represents P_m(N) in the plane P_r(N) = 0. The green curve represents the complex probability vector Z(N) = P_r(N) + P_m(N) = Re(Z) + i × Im(Z) in the plane P_r(N) = iP_m(N) + 1. The curve of Z(N) starts at the point J (P_r = 0, P_m = i, N = 0 iterations) and ends at the point L (P_r = 1, P_m = 0, N = N_C = 500,000,000 iterations). The line in cyan is P_r(0) = iP_m(0) + 1 and it is the projection of the Z(N) curve on the complex probability plane whose equation is N = 0 iterations. This projected line starts at the point J (P_r = 0, P_m = i, N = 0 iterations) and ends at the point (P_r = 1, P_m = 0, N = 0 iterations). Notice the importance of the point K corresponding to N = 250,000,000 iterations, when P_r = 0.5 and P_m = 0.5i. The three points J, K, L are the same as in Figure 56.
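The theoretical product above, together with a (much shorter) Monte Carlo estimate of it, can be sketched as follows; the trial count is deliberately tiny compared with the paper's N_C = 500,000,000:

```python
import random

def p_all_different(n):
    """Exact probability that n people have n different birthdays."""
    p = 1.0
    for k in range(n):
        p *= (365 - k) / 365
    return p

def p_all_different_mc(n, trials, seed=0):
    """Monte Carlo estimate of the same probability."""
    rng = random.Random(seed)
    hits = sum(
        len({rng.randrange(365) for _ in range(n)}) == n
        for _ in range(trials)
    )
    return hits / trials

R_E = p_all_different(13)            # 0.80558972..., as in Table 3
R_A = p_all_different_mc(13, 20_000)
print(R_E, R_A)                      # R_A close to R_E
```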

The third random experiment: the two dice problem
The following programme has an analytic solution besides a simulated solution. This is advantageous for us because we wish to compare the results of Monte Carlo simulations with theoretical solutions. Consider the experiment of tossing two dice. For an unloaded die, the numbers 1, 2, 3, 4, 5, and 6 are equally likely to occur. We ask: what is the probability of throwing a 12 (i.e. a 6 appearing on each die) in 14 throws of the dice?
There are six possible outcomes from each die, for a total of 36 possible combinations. Only one of these combinations is a double 6, so 35 out of the 36 combinations are not correct. With 14 throws, we have (35/36)^14 as the probability of never obtaining the double 6. Hence, 1 − (35/36)^14 = 0.325910425 . . . is the exact answer and therefore the value of R_E. Not all random problems of this type can be analysed like this.
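The simulated counterpart can be sketched compactly; the trial count here is tiny compared with the paper's N_C = 100,000,000:

```python
import random

R_E = 1.0 - (35.0 / 36.0) ** 14      # exact answer, 0.325910425...

def double_six_in_14_throws(rng):
    """One experiment: 14 throws of two dice; success if any double 6."""
    for _ in range(14):
        d1 = rng.randrange(1, 7)     # first die
        d2 = rng.randrange(1, 7)     # second die
        if d1 == 6 and d2 == 6:
            return True
    return False

rng = random.Random(0)
trials = 50_000
R_A = sum(double_six_in_14_throws(rng) for _ in range(trials)) / trials
print(R_E, R_A)                      # R_A close to 0.3259
```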
Moreover, the four figures (Figures 60-63) show the increasing convergence of the Monte Carlo method and simulation to the exact result R_E = 0.325910425 . . . for N = 50, 100, 500, and N = N_C = 100,000,000 iterations. Therefore, we have P_r(N_C) = 1, which is equal to the convergence probability of the Monte Carlo method as N → +∞. Additionally, Figure 64 clearly illustrates the relation of the Monte Carlo method to the complex probability paradigm, with all its parameters (Chf, R_A, P_r, MChf, R_E, DOK, P_m/i, Pc), after applying it to this two dice problem.
In the first cube (Figure 65), the thick curve of DOK(N) and Chf(N) for the two dice problem starts at the point J (DOK = 1, Chf = 0) when N = 0 iterations and returns at the end to J (DOK = 1, Chf = 0) when N = N_C = 100,000,000 iterations. The other curves are the graphs of DOK(N) (red) and Chf(N) (green, blue, pink) in different planes. Notice that they all have a minimum at the point K (DOK = 0.5, Chf = −0.5, N = 70,000,000 iterations). The point L corresponds to (DOK = 1, Chf = 0, N = N_C = 100,000,000 iterations). The three points J, K, L are the same as in Figure 64.
In the second cube (Figure 66), we can notice the simulation of the convergence probability P_r(N) and its complementary real divergence probability P_m(N)/i in terms of the iterations N for the two dice problem. The line in cyan is the projection of Pc²(N) = P_r(N) + P_m(N)/i = 1 = Pc(N) on the plane N = 0 iterations. This line starts at the point (P_r = 0, P_m/i = 1) and ends at the point (P_r = 1, P_m/i = 0). The red curve represents P_r(N) in the plane P_r(N) = P_m(N)/i. This curve starts at the point J (P_r = 0, P_m/i = 1, N = 0 iterations), reaches the point K (P_r = 0.5, P_m/i = 0.5, N = 70,000,000 iterations), and gets at the end to L (P_r = 1, P_m/i = 0, N = N_C = 100,000,000 iterations). The blue curve represents P_m(N)/i in the plane P_r(N) + P_m(N)/i = 1. Notice the importance of the point K, which is the intersection of the red and blue curves at N = 70,000,000 iterations, when P_r(N) = P_m(N)/i = 0.5. The three points J, K, L are the same as in Figure 64.
In the third cube (Figure 67), we can notice the simulation of the complex random vector Z(N) in C as a function of the real convergence probability P_r(N) = Re(Z) in R and of its complementary imaginary divergence probability P_m(N) = i × Im(Z) in M, in terms of the iterations N for the two dice problem. The red curve represents P_r(N) in the plane P_m(N) = 0 and the blue curve represents P_m(N) in the plane P_r(N) = 0. The green curve represents the complex probability vector Z(N) = P_r(N) + P_m(N) = Re(Z) + i × Im(Z) in the plane P_r(N) = iP_m(N) + 1. The curve of Z(N) starts at the point J (P_r = 0, P_m = i, N = 0 iterations) and ends at the point L (P_r = 1, P_m = 0, N = N_C = 100,000,000 iterations). The line in cyan is P_r(0) = iP_m(0) + 1 and it is the projection of the Z(N) curve on the complex probability plane whose equation is N = 0 iterations. This projected line starts at the point J (P_r = 0, P_m = i, N = 0 iterations) and ends at the point (P_r = 1, P_m = 0, N = 0 iterations). Notice the importance of the point K corresponding to N = 70,000,000 iterations, when P_r = 0.5 and P_m = 0.5i. The three points J, K, L are the same as in Figure 64.
At the end of all the simulations, it is crucial to mention here that all the previous examples (9.1.1 till 9.1.5 and 9.2.1 till 9.2.2) are illustrations of a linear convergence of the approximate result R_A to the exact result R_E; therefore, the CPP parameters (P_r, MChf, DOK, P_m/i) meet at the middle of the simulation, at (N_C/2, 0.5), and the parameter Chf is minimal at (N_C/2, −0.5), since R_A converges linearly to R_E (Figures 22, 27, 32, 37, 42, 48, 56). Actually, in these simulations, N = N_C/2 corresponds to R_A = R_E/2.
But in the last example and simulation (9.2.3: the two dice problem) we have the case of a nonlinear convergence of the approximate result R_A to the exact result R_E (Figures 60-63); therefore, the CPP parameters (P_r, MChf, DOK, P_m/i) do not meet at the middle of the simulation, N_C/2, but at (N = 70,000,000 iterations, 0.5), and the parameter Chf is minimal at (N = 70,000,000 iterations, −0.5), since R_A does not converge linearly to R_E but follows a nonlinear curve (Figure 64). Actually, in this simulation, N = 70,000,000 iterations corresponds to R_A = R_E/2. These facts are the direct consequence of the solution of the stochastic problem in question and of the algorithm used in the simulation.

Conclusion and perspectives
In the present research work, the novel extended Kolmogorov paradigm of eight axioms (EKA) was applied and bonded to the classical and stochastic Monte Carlo numerical methods. Hence, a tight link was made between Monte Carlo methods and the original paradigm. Therefore, the model of 'Complex Probability' was elaborated further, beyond the scope of my previous twelve research works on this subject.
Additionally, as was verified and shown in the novel model, when N = 0 (before the beginning of the random simulation) and when N = N_C (when the Monte Carlo method converges to the exact result), the degree of our knowledge (DOK) is one and the chaotic factor (Chf and MChf) is zero, since the random effects and fluctuations have either not started yet or have finished their task on the experiment. During the course of the stochastic experiment (0 < N < N_C) we have 0.5 ≤ DOK < 1, −0.5 ≤ Chf < 0, and 0 < MChf ≤ 0.5. Notice that during this whole process we always have Pc² = DOK − Chf = DOK + MChf = 1 = Pc; that means that the simulation, which looked stochastic and random in the set R, is now certain and deterministic in the set C = R + M, and this after adding the contributions of M to the phenomenon occurring in R and thus after subtracting and eliminating the chaotic factor from the degree of our knowledge. Moreover, the convergence and divergence probabilities of the stochastic Monte Carlo method corresponding to each iteration cycle N have been evaluated in the probability sets R, M, and C by P_r, P_m, and Pc respectively. Consequently, at each instance of N, the new Monte Carlo method and CPP parameters R_E, R_A, P_r, P_m, P_m/i, DOK, Chf, MChf, Pc, and Z are certainly and perfectly predicted in the complex probability set C, with Pc maintained constantly and permanently equal to one. In addition, using all the simulations illustrated and the graphs drawn throughout the whole research work, we can quantify and visualize both the certain knowledge (expressed by DOK and Pc) and the chaos and random effects of the system (expressed by Chf and MChf) in Monte Carlo methods.
This is definitely very fascinating, fruitful, and wonderful and proves once again the advantages of extending the five probability axioms of Kolmogorov and thus the novelty and benefits of this original field in prognostic and applied mathematics that can be called verily: 'The Complex Probability Paradigm'.
Furthermore, it is important to indicate here that one very well-known and essential probability distribution was considered in the present paper, namely the discrete uniform probability distribution, as well as one specific uniform random numbers generator, knowing that the novel CPP model can be applied to any uniform random numbers generator existing in the literature. This will certainly lead to analogous conclusions and results and will undoubtedly show the success of my original theory.
Moreover, it is also significant to mention that it is possible to compare the current conclusions and results with existing ones from both theoretical investigations and analyses and from simulation research and studies. This will be the task of subsequent research papers.
As prospective and future work and challenges, it is planned to elaborate further the original prognostic paradigm created here and to apply it to a varied set of nondeterministic systems, such as other random experiments in classical probability theory and in stochastic processes. Furthermore, we will also apply CPP to the field of prognostics in engineering, using the first-order reliability method (FORM), as well as to random walk problems, which have enormous applications in physics, economics, chemistry, and applied and pure mathematics.

Disclosure statement
No potential conflict of interest was reported by the author.