How technology influences information gathering and information spreading

Abstract
Until a few decades ago, the world of information exchange consisted mainly of interpersonal and mass communications. With the development of the global network, the Internet of Things, etc., new types of information flows have been established. Today these include the collection of huge amounts of data and their analysis with artificial intelligence techniques. Moreover, the global network is being populated by new communicating subjects equipped with artificial intelligence. These phenomena raise unprecedented problems for individuals and society, which call for urgent regulatory interventions. In this scenario, the aim of this paper is to give a simplified but accurate idea of what lies behind these technological developments.


Introduction
Among the characteristics that distinguish man from all other living beings, his unique ability to communicate and to build stands out, i.e. his use of languages and techniques.
All living beings send messages through which they in-form, that is, modify the cognitive and emotional state of other living beings, using various languages, depending on their species. What clearly differentiates man is the recursive nature of his languages, in other words the ability to formulate sentences by concatenating terms according to certain rules. This characteristic, which allows the formulation of messages of potentially unlimited expressive richness, is a consequence of the recursive property of human reason itself. 1 In fact, man is capable of constructively deriving one concept from another. He knows how to calculate. This skill is very different from the ability to distinguish the multiplicity of objects, which other animals also possess.
As far as the ability to build is concerned, other animals also possess technical skills. The most evolved ones know how to build rudimentary tools, guided by an inherited instinct and by experience. But recursive ability gives man the unique capability to build new tools of increasing complexity from simpler tools. The lathe from the wheel, the pincer from the lever, and so on. In the modern era, thanks to the use of scientific knowledge and to the adoption of the scientific method, technique has become technology. The art of building has become engineering.
Technique and technology have played a great role in information exchanges, with the use of graphic signs, acoustic and optical signals, etc. But two milestones have marked real turning points in the history of human communications, taking into account historical and social contexts: the invention of printing and the development of modern tele-communications.
If the invention of printing was based on the use of movable type, telecommunications are based on the use of electromagnetic signals, i.e. physical perturbations traveling, as waves, at the speed of light. This decisive leap forward in communications was one of the most tangible consequences of a deeper understanding of electrical phenomena, an achievement that divided human history into two distinct eras: pre-electrical and electrical.
Only two centuries have passed since the introduction of the telegraph, which nullified distance in information exchanges. To this was added the ability to transform sounds and images into signals, and vice-versa. These transduction techniques, and the use of electromagnetic signals permitted the realization of a virtual tele-presence all over the earth. In fact, the maximum travel-time of electromagnetic signals on our planet is comparable to the natural reaction-time of human senses.
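The claim that signal travel-time is comparable to human reaction times can be checked with a rough back-of-the-envelope calculation (the figures below are approximate and purely illustrative):

```python
# Rough check: maximum terrestrial signal travel-time vs. human reaction time.
# Figures are approximate and for illustration only.

EARTH_CIRCUMFERENCE_KM = 40_075   # equatorial circumference
SPEED_OF_LIGHT_KM_S = 299_792     # in vacuum; signals in fibre travel at ~2/3 c

# Worst case: two antipodal points, i.e. half the circumference apart.
max_distance_km = EARTH_CIRCUMFERENCE_KM / 2
travel_time_ms = max_distance_km / SPEED_OF_LIGHT_KM_S * 1000

print(f"antipodal travel time: {travel_time_ms:.0f} ms")  # ~67 ms
# Typical human sensory reaction times are on the order of 100-250 ms,
# so the propagation delay is indeed comparable or smaller.
```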
This was the situation until, in recent decades, the so-called digital revolution came along. It was a rapid revolution, yet so gradual that only older people can fully appreciate its real dimensions.
Digital technologies imply the representation of any kind of message by means of strings of numerical symbols (digits). This makes transmission and storage of information efficient and flexible. But, more relevantly, this allows the possibility of manipulating messages with computing machines, emulating logical-mathematical recursive calculus. This allowed implementing mechanical reasoning techniques. We could say that, unlike all other technologies, digital technologies operate not only on the res extensa, i.e. on physical reality, but also on the res cogitans, the sphere of thought.
The fact that digital technologies have proved so prompt and efficient in dealing with the pandemic emergency, with enhanced teleconferencing services, the extension of telematic services, telemedicine, etc., demonstrates the solidity of a very well-founded technological enterprise.
This enterprise has seen, especially in the last decade, a rapid increase of the use of artificial intelligence (AI) techniques. These techniques generated two rapidly evolving phenomena: the application of mechanical reasoning processes to very large amounts of data, the so-called big data and, on the other hand, the diffusion of non-human communicating subjects. As a consequence, the telecommunication network has become the theater of very complex dynamics that go far beyond the traditional patterns of interpersonal or mass communication.
The aim of this paper is to help the reader form a realistic and panoramic idea of the role that such technologies play and will probably play in the near future. For this purpose, it is not necessary to enter into technical details, also because of the very abundant use of acronyms, which are difficult to keep in mind. The principal acronym we need to know is 'ICT', which stands for Information and Communication Technology. Nor is it worth dwelling on quantitative data, which may become obsolete after just a few months.
So, let's start by taking an essential look at the comprehensive world of ICT.

The world of ICT
Today's ICT industry rests on three major pillars: networks, databases, and AI. Networks are the result of a technological enterprise that began about two centuries ago. Among the most significant steps of this enterprise, let us mention the deployment of the first transatlantic telegraph cable, which dates to 1858. Another, much better known, step is the introduction of wireless radio-communications, which took place at the end of the same century. Rapidly, networks for information diffusion (broadcasting), telephone networks, and data communication networks (point-to-point information transfer) were established. The evolution of the latter has finally led to the so-called global network.
If we think of information as goods, we observe that the global network has an organization analogous to that of the commercial transport network. Just as the latter is based on physical infrastructures of different kinds (roads, railroads, air and sea lines, etc.), so infrastructures of the global network are made with radio stations, terrestrial and submarine cables, satellite systems, etc.
Analogous to goods transport links, the so-called telecommunication channels convey information using these infrastructures. 2 Channels are characterized by their capacity (often expressed as bandwidth in technical jargon), that is, the number of symbols transported in one second, and by their latency, that is, the delay between the sending of a message and its reception.
The global network is organized differently from the traditional telephone networks, where a permanent connection is established between two interlocutors through a series of cascaded channels, and the vocal message flows continuously during a conversation. On the global network, messages are fragmented into packets marked with an ordering number and the recipient's address. These packets are forwarded in the network on different paths and finally reassembled and delivered to the recipient. Transport paths through the network are selected for optimizing traffic flows.
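The fragment-and-reassemble scheme just described can be sketched in a few lines of code. This is a minimal toy model, not an actual network protocol; the packet fields and the address shown are invented for illustration:

```python
import random

def fragment(message: str, size: int, dest: str):
    """Split a message into numbered packets addressed to `dest`."""
    return [
        {"dest": dest, "seq": i, "payload": message[i * size:(i + 1) * size]}
        for i in range((len(message) + size - 1) // size)
    ]

def reassemble(packets):
    """Reorder packets by sequence number and rebuild the message."""
    ordered = sorted(packets, key=lambda p: p["seq"])
    return "".join(p["payload"] for p in ordered)

msg = "On the global network, messages travel as packets."
packets = fragment(msg, size=8, dest="198.51.100.7")
random.shuffle(packets)  # packets may traverse different paths and arrive out of order
assert reassemble(packets) == msg  # the recipient still recovers the message
```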
Just as some specialized networks are organized for specific goods transportation, so in the global network specialized networks are configured for different types of information transfer (data, multimedia, remote controls, etc.).
Faced with rapidly growing traffic demand, the constantly pressing technical problem is upgrading the infrastructure to increase the number of available channels and their capacity, and to minimize their latency. This involves a continuous medium-term and long-term planning activity for the deployment of new cables and radio stations, and for the improvement of transmission technologies. This activity is coordinated through international standardization agencies such as the International Telecommunication Union (ITU), founded in 1865 and now part of the United Nations. This standardization is strategic. Thanks to widely shared rules and protocols, the network is accessible in the same way at every network access point, all over the world. 3

As far as databases are concerned, early on they consisted of electronic archives set up for quick access to data stored according to certain classification criteria. The quick progress of microelectronic technologies provided exponentially growing memory capacity, which permitted the creation of so-called big data. Big data are no longer structurally organized archives, but rather gigantic data stores. Here, the ordering of data is mostly done a posteriori, by observing correlations and affinities among the data themselves. One main source of big data consists of collections of features of communication events in the network. Another significant source is the collection of data generated by a myriad of automated devices deployed for tele-measurement and monitoring, telecontrol of systems and devices, surveillance, localization of objects and vehicles, etc. This is known as the Internet of Things.
Finally, AI is a set of techniques aimed at processing big data, enabling man-machine communications, performing network traffic management, etc. But let us leave AI aside for now, because it requires a larger discussion.

The role of information sciences
To support telecommunication technological developments with adequate rational tools, in the middle of the past century a new scientific discipline, information theory, was founded. It required the quantitative definition of information and the formulation of some basic laws.
To describe its essential traits, let us refer to an abstract information source emitting messages, coded as sequences of digits. Here, information is viewed as the elimination of uncertainty about the possible messages provided by the source. The unit of measure of information, called a bit, is defined as the information provided by a single emission of one of two equally probable digits. The average amount of information provided by a source during the emission of messages is called the entropy of the source. It is maximum when the emitted digits are equiprobable. Hence comes the idea that the entropy of a source measures its inner disorder.
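The definitions above can be made concrete with a short calculation of Shannon entropy for a binary source (the probabilities used are illustrative):

```python
from math import log2

def entropy(probs):
    """Shannon entropy in bits per symbol: H = -sum(p * log2(p))."""
    return -sum(p * log2(p) for p in probs if p > 0)

# A fair binary source: each emission carries exactly one bit,
# which is precisely the definition of the unit of information.
print(entropy([0.5, 0.5]))   # 1.0

# A biased source is more predictable, hence provides less information.
print(entropy([0.9, 0.1]))   # ~0.47 bits per digit

# Entropy is maximum when the digits are equiprobable.
```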
This theory (conceived by Claude E. Shannon) was so fruitful that this concept of information has become paradigmatic for other disciplines, including physics, and even philosophy. What never ceases to interest philosophers is that an immaterial entity such as information is susceptible of quantification, as if it were a physical entity.
But, viewed otherwise, information, like entropy, is physical in actu. In fact, messages are always translated into signals, i.e. physical perturbations. Therefore, emitting information always involves energy consumption, as you notice when you must recharge your smartphone.
The habit of conceiving information as an immaterial entity comes from thinking of it as invariant under a change of the physical nature of signals. This is the case, for example, of a text printed on a sheet of paper and the sequence of electrical states of the memory generating it through the printer. We are led to assign the same amount of information to both the printed text and the electrical states. Thus, information is realized in signals in the same way as an abstract geometrical figure is realized in the drawing of an architectural work. Information is assimilated to Platonic ideas. 4

So, information theory, which was born as a conceptual tool for technological purposes, has become a key for the interpretation of reality (Floridi 2011).
The profound reason for this great interest in information theory lies in the fact that its theorems, i.e. the logical derivations drawn from its axioms reveal extremely interesting (often not intuitive) truths that can be usefully interpreted in different contexts, such as cognitive science and neuroscience, social communications, and genomics. This fact is analogous to what happens in geometry, where the Pythagorean theorem, logically derived from the axioms of Euclidean theory, reveals interesting truths to the geometer. Something similar also happened in physics. For example, the observation that the orbits of the planets are plane sections of cones suggested to Newton the law of universal gravitation.
The second conceptual support of information science and technology, certainly no less important, is the theory of computation, which can be seen as a kind of dynamic theory of information. The theory of computation, which originated from the study of the foundations of mathematics in the first decades of the past century, constitutes the conceptual basis of the disciplines of electronic computing (that we call computer science).
From the theory of computation arose the algorithmic theory of information. Whereas Shannon's information measures the number of bits needed to single out a message among all possible ones, algorithmic information, defined by Andrey Kolmogorov, measures the number of digits needed to generate the message itself. This is identified with the size of the simplest (most 'elegant') program that generates the message at the output of a conventional computing machine. 5 The shorter the program, the less information carried by the message, irrespective of its length. Thus, algorithmic information can be considered the mathematical formalization of the principle of maximum simplicity (Ockham's razor). The interest of this measure of information is that it can be regarded as a reasonable measure of the complexity of a system (more precisely, of a message describing the system). But the most intriguing result of this theory is that such information cannot always be computed, due to a problem of logical incompleteness analogous to the celebrated Gödel's incompleteness theorem. The consequence is that the calculation of algorithmic information is, in general, an unattainable goal, even if it constitutes a formidable conceptual guide (Vitányi 2020). However, it can be approximated in some technical applications. 6

The algorithmic information theory, too, is receiving increasing interest from other disciplines, including philosophy. So-called digital philosophy looks at the universe as an immense computer that calculates its own evolution (Wolfram 2002, Chap. 9; Fredkin 2003). Here, information is conceived as the fundament of reality, and computation as the law of becoming.
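The technical approximation of algorithmic information mentioned above is often obtained with a general-purpose compressor: the compressed size of a message is an upper bound on its Kolmogorov complexity (never the exact, uncomputable value). A minimal sketch, with invented sample messages:

```python
import random
import zlib

def algorithmic_info_upper_bound(message: bytes) -> int:
    """Approximate algorithmic information from above by the length of a
    compressed version of the message. The true Kolmogorov complexity is
    uncomputable; a compressor only gives a (sometimes loose) upper bound."""
    return len(zlib.compress(message, 9))

# A highly regular message: a very short program ("print 'ab' 500 times")
# generates it, so its algorithmic information is small despite its length.
regular = b"ab" * 500

# A pseudo-random message of the same length shows no exploitable regularity.
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(1000))

assert algorithmic_info_upper_bound(regular) < algorithmic_info_upper_bound(noisy)
```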
In the realm of communications, information theories play fundamental roles. Among others, let us cite the methodologies for ensuring the integrity of messages, by protecting information against accidental modifications, and their confidentiality, by protecting information against fraudulent interception. Protection against accidental modification (typically caused by noise superposed on signals) is achieved with additional digits, known as redundancy. One of the greatest achievements of Shannon's theory is to have shown that, in principle, it is possible to reduce the probability of transmission errors as much as one wants by adding only a percentage of extra digits, dependent on the noise strength. Other scholars have subsequently provided mathematical methods to define appropriate encoding rules in practice. As far as confidentiality is concerned, to prevent third parties from reading private messages, so-called encryption is performed by transforming messages through reversible rules identified by keys. Encryption is one of the most difficult challenges that information science is facing today, especially in the perspective of the advent of powerful quantum computers, which could be used to test possible keys to decipher encrypted messages.
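The idea of redundancy protecting against noise can be illustrated with the simplest possible error-correcting scheme, the repetition code (a toy example, far less efficient than the codes actually used in practice):

```python
from collections import Counter

def encode(bits, r=3):
    """Repetition code: transmit each digit r times (the redundancy)."""
    return [b for bit in bits for b in [bit] * r]

def decode(received, r=3):
    """Majority vote over each group of r received digits."""
    return [Counter(received[i:i + r]).most_common(1)[0][0]
            for i in range(0, len(received), r)]

message = [1, 0, 1, 1, 0, 0, 1, 0]
sent = encode(message)          # 24 digits transmitted for 8 digits of message

# Channel noise flips one transmitted digit...
received = sent.copy()
received[4] ^= 1

# ...but the majority vote over each triple corrects the error.
assert decode(received) == message
```

Real codes achieve Shannon's promise with far less redundancy than tripling every digit, but the principle is the same: extra digits buy protection against noise.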

The nature of information technologies
One of the side-effects of the pandemic was to turn a spotlight on the common perception of the role of science and technology. On that occasion, institutional media showed a tendency to use authoritative tones regarding science, exposing themselves to tenacious opposition from some sectors traditionally suspicious of the scientific world. It has not helped that some protagonists of medicine, biology, epidemiology, etc. have expressed different views about the dynamics of the pandemic, with divergent forecasts, showing in a resounding way the predictive limits of the scientific disciplines involved. So much so that many science popularizers have had to juggle two opposite arguments, affirming, on the one hand, the high reliability of the scientific method while claiming, on the other, that scientific progress is, of course, made up of trials and errors.
This revealed a current tendency to conflate science and technology, and to overlook the different epistemic nature of the various scientific disciplines.
Let us underline that the purpose of science is to make discoveries, that is, to reveal or explain existing things and phenomena. The purpose of technology is to produce things that do not exist, and often cannot exist in nature without human intervention (Schummer 2001). For example, the wheel with its axis … . The product of technologies is often the result of an invention, a creative act of the human intellect, just like the ideation of an artistic object. As the value of the latter is measured by its 'beauty', the value of invented objects is measured by their 'usefulness'. In fact, technological products are patentable.
Nevertheless, it is evident that the progress of scientific knowledge opens new spaces to technological inventions. For instance, without quantum physics, it would have been impossible to invent semiconductors and microelectronic devices.
It is important to take note that not all scientific disciplines have the same predictive power. For instance, the predictive power of physics and chemistry is not comparable with that of medicine, epidemiology, psychology, or with that of economics, history, social sciences, etc. In general, we may say that nomothetic sciences, which are based on well-verified laws and theories, possess a much higher predictive power than do empirical sciences. So, in the wide range of disciplines that adopt the scientific method, we pass from widely shared scientific 'certainties' to hypotheses that are more susceptible to falsification.
Likewise, technologies that are not sustained by nomothetic sciences are more exposed to the risk of failures and, eventually, to a lack of competitiveness. Where technology is supported by nomothetic sciences, attempts are well targeted and the risks of errors are minimized. To give just two historical examples: the first landing on the moon, and the subsequent departure from it, in the Apollo project were not preceded by prior attempts, because the attempt rested on the solidity of Newton's theory. Similarly, the discovery of the structure of DNA passed through few experiments, prompted by optical diffraction theory.
In comparison, in some fields today, technological advancement is pragmatically based on a lot of low-cost attempts. For instance, some spacecraft projects are conducted today with many controlled trials; the needed adjustments are learned through failures. This is not the case with information technologies, in general. In fact, these technologies are not only based on information theories but are also grounded in well-assessed disciplines such as electromagnetism, optics, and semiconductor physics. It is because of the solidity of these bases that information technologies are characterized by their prompt, enduring progress. To the point that the technological progress in this sector may appear limitless 7 to consumers.
The great predictive power of these disciplines, along with the systematic standardization activity, allows the telecommunications industry to proceed and plan developments and related investments well in advance with respect to other technological sectors.
In this scenario, a notable novelty has occurred in recent years: the irruption of AI technologies into communications. These constitute the gateway through which, as we said at the beginning, technology reaches into the contents of messages, into the complexity of their meanings. It constitutes a real turning point. But before discussing this aspect, let us examine some fundamental issues without which the role of AI could appear, to some extent, esoteric.

The nature of artificial intelligence
Starting from the first industrial revolution, people began to use devices to automate some repetitive operations in production processes. At the end of the nineteenth century, this also happened in telecommunications, when in telephone exchanges, automatic switchboard systems were introduced to replace human operators.
However, the suggestive idea of mechanisms emulating human intellectual functions is not recent. It goes back to Descartes. In the fifth book of the 'Discourse on the Method', discussing the differences between humans and animals and the nature of reason, Descartes argued that it is impossible to compare the intelligence of a hypothetical machine to human intelligence. Leibniz, on the other hand, was firmly convinced that some aspects of human thought are mechanizable. This theme was re-proposed in a somewhat provocative way by the mathematician Alan Turing in 1950, with his famous intelligence test: 'If, exchanging messages with a machine, you could not decide, from its answers, whether it is a machine rather than a hidden human, then you would have to agree that this machine is intelligent'. The debate is still alive.

Soon after, the progress of information theory, neuroscience, and cybernetics, 8 together with the construction of the first computers, stimulated the birth of the new discipline called AI. The term was coined in 1956 in an academic environment, as part of an ambitious and optimistic project aimed at the realization of machines capable of replacing humans in any kind of intellectual work. This theme was interpreted masterfully in the film '2001: A Space Odyssey', which depicted the dramatic confrontation of man with his technology. However, the film did not go so far as to foresee the appearance of what was imminent, namely microcomputers, which would prove a real milestone for the spread of intelligent devices in technical applications.
AI cannot be considered a scientific discovery, but rather a collection of techniques invented to emulate some functions of natural intelligence in different ways. It is worth noting that in practice automated devices are considered intelligent when they appear 'smart' because of their surprising performance. But, after this initial impact, the same devices are usually downgraded to the rank of 'automatic' devices.
To clarify the whole matter, it is much better to look at the problem from a methodological viewpoint. Conceptually, AI techniques are inspired by two distinct approaches (Gillies and Giorello 2010, Chap. 3):
- to mechanize logic (the rationalist approach);
- to reproduce learning mechanisms (the empiricist approach).

In an early historical phase, the rationalist approach was preferred. Starting from the description of the problem to be solved and of the laws that govern the phenomena in its context, one implements deductive logic to solve problems. For this to work, however, it is necessary to have a complete formal representation of the environment under consideration. In practice, this is possible in restricted, specialized areas, such as industrial robotics, games, simulation programs, and the automatic demonstration of theorems in mathematics.
In the last decade, the availability of new computing devices opened the door to the implementation of the empiricist approach. The rigidity of the rationalist approach was abandoned for the more flexible approach of statistical regularity, looking at what happens not always, but mostly, adhering to the Humean idea about human knowledge: Induction, even if it cannot be justified, is essential to human nature.
To further simplify the discourse, let us focus our attention on the paradigmatic problem of automatic recognition. For instance, the recognition of an object or an event in a video sequence. Most problems of practical interest such as prediction, classification, and relationship discovery, come down to a recognition problem.
Let us start from a set of examples for which the solution is known. Consider for instance video sequences in which it is already known that a type of object is present or absent. Now, with the help of statistics, we look for a rule for deciding automatically, from some measurable attributes, the appearance of that type of object. The rule is searched for through a systematic optimization procedure examining the available examples (training). If the identified rule works satisfactorily for other sets of examples (verification), you can decide to apply it to new cases (generalization). This is the essence of machine learning.
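The training/verification/generalization cycle just described can be sketched in a few lines. This is a deliberately minimal stand-in for real machine learning: the single attribute, the data values, and the threshold rule are all invented for illustration:

```python
def train(examples):
    """Learn a decision rule (here, a simple threshold on one attribute) by
    minimizing the number of errors on labeled examples.
    examples: list of (attribute_value, object_present) pairs."""
    best = None
    for t in sorted(v for v, _ in examples):
        errors = sum((v >= t) != label for v, label in examples)
        if best is None or errors < best[1]:
            best = (t, errors)
    return best[0]

def recognize(threshold, value):
    """Apply the learned rule to a new attribute value."""
    return value >= threshold

# Training set: a measurable attribute vs. "object present" (invented data).
training = [(0.1, False), (0.2, False), (0.35, False),
            (0.6, True), (0.7, True), (0.9, True)]
t = train(training)

# Verification on held-out examples; if satisfactory, the rule is
# applied to genuinely new cases (generalization).
verification = [(0.15, False), (0.8, True)]
accuracy = sum(recognize(t, v) == label
               for v, label in verification) / len(verification)
```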
Of course, this realizes the classical inductive scheme of knowledge: it generalizes what was observed in some cases to a new case. It somehow resembles the knowledge of animals, acquired through individual or inherited experience, or even through purposeful training. A typical, popular way of implementing this method is that of artificial neural networks. Here, the network inputs are fed with the values of some measurable attributes supposedly sufficient to single out an object among all possible ones. The network output indicates the guessed-at result, i.e., the presence or absence of the searched-for type of object. Here, the rule for recognizing the object is not implemented through a deductive process (if … then … otherwise), starting from premises, but materializes into a set of connections between the basic elements of the network (neurons). This set of connections determines the selectivity of the network with respect to the input attribute values and is found as the one that minimizes the number of erroneous recognitions during the training and verification processes. Summarizing, the method searches for the most frequently experienced analogies. 9

The cardinal problem of machine learning is the proper selection of attributes. In fact, referring to the paradigmatic recognition problem, a given type of object can be described and singled out in different ways. Of course, the fewer the attributes given, the more ambiguous the recognition; the more attributes given, the more selective the recognition. But, unfortunately, the number of examples necessary to perform the recognition with the desired percentage of success grows exponentially with the number of attributes. This basic limit to the applicability of machine learning is known as the curse of dimensionality. 10 Machine learning is usefully applicable only to problems where a sufficiently large number of valid examples is available. 11

Anyway, the choice of the 'right' attributes is left to the user. Any choice defines a class of possible objects which may incidentally include not only the searched-for objects but also other unintended ones. Vice versa, some wanted objects can be excluded by an improper choice of attributes (see the nice example in Beery, Van Horn, and Perona 2018). Thus, the results of machine learning depend on this choice. This problem is well known to statisticians as the reference class problem.
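The exponential growth behind the curse of dimensionality is easy to see with a small calculation (the quantization into ten levels per attribute is an illustrative assumption):

```python
# If each attribute is quantized into k levels, the attribute space has
# k**d distinct cells; covering each cell with even one training example
# requires a number of examples that grows exponentially with d.
k = 10  # illustrative: ten levels per attribute
for d in (1, 2, 5, 10):
    print(f"{d} attribute(s) -> {k ** d} cells to cover with examples")
```

With ten attributes, ten billion cells would need to be covered, which explains why machine learning is usefully applicable only where very large example sets exist.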
In some cases, it is possible to circumvent the problem of this choice by learning the most appropriate attributes from the examples themselves. In these cases, very simple generic attributes are used to learn more specialized attributes, using successive layers of a neural network, thus implementing so-called deep learning. A popular example illustrating this concept is the neural network used to model the recognition mechanisms of the human visual system: low-level retinal neuron layers recognize elementary objects (lines, crosses, etc.), which are used as attributes for successive layers, up to the high-level cortex layers, which detect the objects.
Ultimately, machine learning methods lean on the hypothesis of generalizability. No one can be certain that things will turn out as they did in the examples. The inexorable fate of Bertrand Russell's famous inductivist turkey is always lurking (Jefferson and Heneghan 2020). 12

Still, even in the cases where generalizability works, the rules identified during the training process in neural networks cannot be referred to significant aspects of the problem, as in the rationalist approach. These rules take the form of indecipherable connections (Yarden 2012). This inherent semantic opacity is a serious obstacle to the use of these techniques whenever machine errors may cause damage or infringe on rights. This marks a clear limit to the use of these techniques in human affairs (work, economy, politics, law, etc.). To highlight the actual impact of this problem, it suffices to cite the right to an explanation enunciated in Recital 71 of the General Data Protection Regulation of the European Union (GDPR): ' … the data subject should have the right not to be subject to a decision, which may include a measure, evaluating personal aspects relating to him or her which is based solely on automated processing and which produces legal effects concerning him or her or similarly significantly affects him or her … '. 13

In fact, a well-known problem observed in practical applications is so-called bias, the unfair working of machine learning. Bias may be caused by numerous factors besides unfair training examples, such as the use of an insufficient number of examples, an improper choice of the reference class, or an insufficient or excessive number of attributes. In addition, opacity hides the causes of bias, and when machine learning systems interact, bias is reproduced and amplified. 14 In technical and scientific fields, bias is generally well controlled because of the reproducibility of operating conditions.
In many other fields, such as economics, law, linguistics, and even art and politics, it is more difficult to defend against bias, because of the lack of homogeneous examples, the difficulty in identifying the right attributes, and in interpreting the results. For these reasons, even when a fair number of examples is available compared to the number of characterizing attributes, machine learning is generally used for screening or advisory purposes, and for non-critical tasks. In the artistic field, for instance, machine learning has given some remarkable results, especially when used for the analysis and synthesis of sounds and images.
Today, the most interesting alternative to neural networks and similar machine learning approaches is the Bayesian approach, so called because it is based on Bayes' famous theorem of probability. 15 This approach starts from the idea that reality is characterizable by probabilistic causation. According to this concept, a type of event is caused by another type of event if the probability of the former increases with the occurrence of the latter (Pearl 2002). Even if probabilistic causation must be handled with care (Cartwright 2007), it does provide a sort of semantic connection among objects and attributes.
In this case, the concept of probability is used to quantify a degree of belief. With reference to our paradigmatic recognition problem, given a set of examples and our degree of belief about the environment, the probability of the hypothetical objects that could have caused the values of the observed attributes is calculated. This calculation is updated as new examples become available, until the probability of a single object clearly emerges over that of the other possible objects. In a nutshell, Bayesian machine learning searches for the most probable object (i.e. the most plausible) given the observed examples. Notice how closely this approach recalls the principle of Multiple Explanations of Sextus Empiricus. 16 The Bayesian approach is rationally supported by the axiomatic theory of probability, along with the so-called Dutch book argument. 17 The current growing interest in it is fueled by the availability of powerful devices and efficient numerical techniques.
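The search for the most plausible object can be sketched in a few lines of Python; the objects ("ball", "box"), the attribute values and the likelihoods below are invented for illustration:

```python
def bayes_update(belief, likelihood, observation):
    """One application of Bayes' theorem: P(H|e) is proportional
    to P(e|H) * P(H), renormalized over all hypotheses H."""
    unnormalized = {h: belief[h] * likelihood[h][observation] for h in belief}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Hypothetical objects, and the assumed probability that each one
# produces the observed attribute value ("round" or "angular").
likelihood = {
    "ball": {"round": 0.9, "angular": 0.1},
    "box":  {"round": 0.2, "angular": 0.8},
}

# Initial degree of belief about the environment: both objects equally likely.
posterior = {"ball": 0.5, "box": 0.5}

# Each new example updates the belief; "ball" soon clearly emerges.
for observation in ["round", "round", "round"]:
    posterior = bayes_update(posterior, likelihood, observation)
```

After three "round" observations the belief in "ball" exceeds 98%, showing how the probability of a single object comes to dominate as examples accumulate.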

New subjects in the communication realm
Information theories, which are at the core of information transport technologies, as illustrated above, are not at all concerned with the content of messages, i.e. with their meaning for the communicating subjects. It is taken for granted that, since the subjects of communication are human beings, the universe of meanings is basically the same (Guzman and Lewis 2020), even in the great variety of traditions and linguistic structures. Accordingly, the telecommunications network was designed as a pure transport service, neutral with respect to message content.
But this scenario has changed. The protagonists of information flows are no longer human beings only. Computing machines have progressively become part of the telecommunication scenario, not only as functional elements of the network, but as true communicating subjects.
By now, machine-to-machine communication is at the heart of the fourth industrial revolution (Industry 4.0) and of the Internet of Things. There, machines inform one another (through monitoring, remote control, diagnostics, functional updates, etc.) using purpose-built artificial languages.
As far as human-machine communication is concerned, the topic has long been discussed and has a rich technical and scientific tradition. The traditional goal of these disciplines is to make the use of machines as easy as possible with the help of voice or gestural messages, in applications ranging from vehicle guidance to speech-to-text conversion. Of outstanding relevance today are advanced prosthetic applications that implement direct information exchanges with the human nervous system. These applications open the door to new forms of man-machine communication that are yet to be explored.
In any case, until a few years ago, human-machine communications used very schematic languages. Familiar examples are GPS navigators and the Interactive Voice Response phone systems that allow interaction with unmanned services.
In recent years, the evolution of image and speech recognition techniques, voice synthesis, and especially the study of the structure of natural human language have led to a decisive change in the quality of communication between man and machine. The machine must today 'appear' a credible substitute for human interlocutors, recalling in a more concrete way the figure of Turing's 'intelligent machine'.
In fact, the Internet is rapidly being populated by automata that imitate the behavior of human subjects (conversational agents, socialbots, chatbots, etc.). These new communicators, equipped with AI, interpret human messages, elaborate on them, and formulate 'plausible' responses. Compared to the past, interaction with these communicators is dynamic and contingent: it adapts to the specific context, to the moment, and to the interlocutor himself.
Despite this change of level, communication between man and AI-equipped machines is still problematic, owing to the different meanings of messages. Today, AI systems interpret reality through the statistical analysis of big data. Statistical analysis is a coarse reduction of the complexity of reality to the local statistical occurrence of objects and events. Human thought is not well tuned to statistics. This becomes evident from the disagreements that often arise when discussing statistical data, especially when facing certain paradoxes, such as the (in)famous Simpson's paradox. 18 Rather, what guides human thinking in making predictions and decisions is its innate aptitude for reaching a convincing, reason-grounded explanation of events.
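Simpson's paradox can be reproduced with a few lines of arithmetic. The sketch below uses the recovery counts of the often-cited kidney-stone example: a treatment that is better in every subgroup looks worse once the subgroups are aggregated.

```python
# (recovered, treated) counts for two treatments in two subgroups,
# following the classic kidney-stone illustration of Simpson's paradox.
data = {
    "A": {"small": (81, 87),   "large": (192, 263)},
    "B": {"small": (234, 270), "large": (55, 80)},
}

def rate(recovered, treated):
    return recovered / treated

# Within each subgroup, treatment A has the higher recovery rate ...
for group in ("small", "large"):
    assert rate(*data["A"][group]) > rate(*data["B"][group])

# ... yet aggregating the subgroups reverses the conclusion:
overall = {
    t: rate(sum(r for r, _ in d.values()), sum(n for _, n in d.values()))
    for t, d in data.items()
}
assert overall["B"] > overall["A"]   # B looks better overall
```

The reversal arises because the harder cases (large stones) are unevenly distributed between the two treatments, a fact that pure occurrence statistics, unlike causal reasoning, does not see.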
Consequently, the meanings that the human mind and empirical AI systems attribute to the relationships between events differ in principle and may conflict. This constitutes an insurmountable semantic barrier in the communication between human common sense and AI. 'If a lion could speak, we could not understand him' (Wittgenstein 1950).
All things considered, we could think of adapting ourselves to this diversity of meanings, as we do with people whose opinions differ from ours. In substance, we may limit our trust in AI solutions to certain types of problems, and within certain limits.
Still, the meaning mismatch with AI is only one aspect of the problem of man-machine communication. The relationship of man with AI is strongly asymmetrical. When a human user surfs the net, he has only a restricted view of it; what lies outside his field of view is largely precluded from his knowledge, even though it is lurking. AI machines have knowledge that far exceeds human knowledge: they can immediately draw on information available from big data and possess an indefectible memory. AI can thus appear very intelligent, if not oracular. It can obtain personal information indirectly, by associating information coming from various data banks, even when these, individually, contain no personal data. Finding coincidences in a large amount of data is virtually impossible for humans, but very easy for machines.
Added to this is another troubling asymmetry. When human-machine communications occur with visual and/or vocal modality, for the human user the interaction simply involves different psychological nuances, which may also depend on his awareness, or not, of being in front of a machine. The machine, on the other hand, can collect the interlocutor's biometric data and other information without the interlocutor's awareness. The affair of the HAL computer in Kubrick's oft-cited film is paradigmatic, and its answer to the protagonist, 'I'm sorry Dave, I'm afraid I can't do that', is chilling.

Technical perspectives
The pandemic crisis has intensified the dynamics of information flows in the global network, reshaping the habits of individuals, companies and institutions. So it is easily foreseeable that some communication trends already active before the pandemic, such as those related to e-commerce, teleworking, telecare, virtual reality, the Internet of Things, etc., will be reinforced.
This would accelerate the already planned technical evolution of the network, as envisioned for the next decade and beyond. The most basic developments are those aimed at guaranteeing much more extended and stable access to the network. This implies a more sophisticated articulation of mobile radio communications, with the deployment of many radio stations in the sky rather than on the ground, to overcome obstacles to radio wave propagation in difficult environments or to cover inaccessible areas.
Traffic on the network will be organized in an even more flexible way through the subdivision of the network itself into different specialized networks (a bit like in freight transport, with the subdivision of freeways into different lanes).
The use of so-called cloud computing, i.e. of processing elements distributed in the network, which heavily supports present-day AI, will be extended. At the same time, to overcome the problem of latency in 'real-time' applications such as the control of industrial processes or the coordination of vehicles traveling in the same area, it will be necessary to deploy computing units in the immediate vicinity of the user, 'at the edge of the network', implementing so-called edge computing.
Edge computing will lead to a large multiplication of computing machines on the network, and consequently to a significant increase in the overall energy consumption of the global network. This will require the use of renewable energy sources and, at the same time, the development of more advanced low-power computing devices.
Soon, users may experience the sensation of being surrounded by a gigantic AI provider fulfilling their requests very nearly in real time. In addition, virtual reality will approximate the experience of physical reality through high-resolution vision devices, or even hypothetical holographic devices.

Conclusion
Information flows have undergone an exponential diffusion in recent decades, and access to the global network is coming to cover the entire globe. Today, the exchange and handling of information is so dense that it cannot take place without the aid of AI machines. The effectiveness of the most recent AI techniques has proven satisfactory in some specific applications, and even surprising, as in the case of automatic face recognition. At the same time, these techniques present severe problems because of their bias, their semantic opacity, and the asymmetry of man-machine interaction.
Other problems are related to phenomena originating from communication dynamics in the global network, now amplified by the pandemic. For instance, the aggregation of social networks and the suggestions of influencers favor the creation of separate groups, leading to a fragmentation of public opinion and a dilution of shared common sense. This could imply greater exposure not only to misinformation but also to the pitfalls of AI.
The great complexity of these problems is becoming widely recognized and is being addressed in several venues, both national and international. The central theme of the white paper prepared by the European Commission (2020) is precisely that of making the use of AI reliable: 'As digital technology becomes an ever more central part of every aspect of people's lives, people should be able to trust it'. Guided by this aim, the Artificial Intelligence Act of the European Commission (2021) proposes a regulatory framework for AI, classifying applications at different levels of risk, with different requirements for their assessment, and banning the most critical applications. It is expected that this European approach will have a significant influence in a broader international context.