Designing the Next Generation of Connected Devices in the Era of Artificial Intelligence

Abstract: The era of the Internet of Things has enabled designers to turn everyday objects into highly technological objects. We see three problematic implications of this development: 1. The forced enhancement of everyday objects. 2. Privacy and security threats. 3. Increased friction. We assume that soon there will be powerful Artificial Intelligence (AI) included in each object, as well as a central intelligent assistant able to communicate with all devices, just as the devices can communicate with each other. To underline this assumption, we give an introduction to AI. Our hypothesis is that the shape of intelligent objects will adapt through the inclusion of AI in everyday objects. The basis of our research consists of case studies, following a specifically developed framework. We refine our findings to understand how an object's smartness and shape correlate. We conclude with a confirmation of our hypothesis and define four design goals for intelligent objects.


Introduction
Humans have been designing throughout history: from the first tools, through architecture and with it furniture design, to essentially all goods and services that exist today (Manzini, 2015, p. 12). Today, the group of objects experiencing a strong increase in design complexity is smart objects. Mark Weiser described the meaning of a smart object as early as the beginning of the 1990s through the term ubiquitous computing. In his understanding, people do not want to interact with computers but want to complete tasks. He therefore wanted to move interactions away from the computer itself by embedding computer technology into the objects all around us, making it ubiquitous, so that we can complete our desired tasks through them (Moggridge, 2007, p. 461). This vision has been fulfilled in quantity over the last years: we now have a plethora of devices with computer technology inside that are partially connected and with which we interact differently than before. At the same time, the market has brought to life many devices which are redundantly enhanced with technology. Another issue that has become more apparent with the evolution of smart objects is privacy. Connecting all smart devices happens over the internet, which brings us to the term Internet of Things, describing the current paradigm of objects which can be readable, recognizable, locatable, addressable, and/or controllable via the Internet, whether via RFID, wireless LAN, wide-area network, or other means. The known privacy and security problems of the internet will likely be amplified through the interconnectedness of all devices (National Intelligence Council [NIC], 2008, p. 27). Additionally, an ever increasing number of connected devices forces us to interact with many single connected devices, which can be described as friction.
When friction increases, the actual usefulness of the connected devices diminishes and we feel more frustrated about the overall experience (Hindi, 2015).
At the center of our paper we pose the assumption that there will soon be sufficiently powerful Artificial Intelligence (AI) included in each object, as well as a central intelligent assistant which will be able to communicate with all devices, just as the devices can communicate with each other. Such a system can execute broad tasks in various fields autonomously and to perfection. To underline this assumption, we give a short definition of AI and an overview of existing systems. Our hypothesis is that, through the evolution of AI and its inclusion in everyday objects, the shape of these "intelligent objects" will adapt in a particular way. The basis of our research consists of case studies of existing smart objects, analyzed with a specifically developed framework which is described prior to the case studies. We then refine our findings to better understand how an object's smartness and shape correlate. Finally, we interpret our results with regard to our hypothesis.

Artificial Intelligence
With the rise of computers in the middle of the 20th century, it became possible to turn the theoretical understanding of what artificial intelligence means into real applications. The limited computing power of that time, however, could not fulfill the expectations regarding the development of AI and continued to hold it back until very recently. It is hard to settle on one true definition of AI, since it has changed over time and can address different dimensions. As a first way of making sense of them, the definitions can be grouped into four categories, addressing an AI system's capacity to think like humans, think rationally, act like humans, and act rationally. The definitions range from: "'[The automation of] activities that we associate with human thinking, activities such as decision-making, problem solving, learning...' (Bellman, 1978), 'The study of the computations that make it possible to perceive, reason and act' (Winston, 1992), to 'The art of creating machines that perform functions that require intelligence when performed by people' (Kurzweil, 1990), 'The branch of computer science that is concerned with the automation of intelligent behavior' (Luger & Stubblefield, 1993)" (Russell & Norvig, 1995, pp. 3-5).
The main distinctions are between thinking and acting, as well as between human-like and rational. This differentiation is important since it indicates that there are two layers affecting where AI will be experienced in a product. Whether the acting happens rationally or in a human-like way, it is the part of AI with which we interact when using an object. The thinking part, on the other hand, happens on an underlying level which is most likely not perceived by the user but still influences our interaction with the object.
One way to get a first understanding of AI is to distinguish six main fields: 1. Artificial Neural Networks are computer programs which try to mimic the way a biological neural network works; an example would be the prediction of the outcome of sporting events. 2. Fuzzy Logic is used to deal with uncertainty in problems and to enable machines to reason more like a human; an example would be the electronic control of car engines to make them more efficient and stable. 3. Software Agents are programs that act autonomously, can interact with each other, perceive their environment, take initiative, run processes continuously, and are goal oriented; the best examples of this field are AI assistants as we know them today, such as Google Now, as well as the movie and music recommendation programs used on platforms like Netflix and Spotify. 4. Knowledge Based Systems are usually used for tasks that require decision-making, in order to automate the process; to make decisions more human-like, they are often combined with fuzzy logic. An example is the way the IRS in the United States uses this type of AI to analyze tax returns and assign a risk score to each taxpayer. 5. Natural Language Processing is the capability of a computer to understand and generate natural human language; known natural language processing programs are Apple's Siri, Microsoft's Cortana, and Amazon's Echo. 6. Genetic Algorithms and Evolutionary Software are problem solving systems which follow the principles of natural selection; they take a very complex and technical approach to finding the best solution for a given problem and can mainly be found in financial applications or production optimization (Sabhnani, Rao, & Panchal, 2001).
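To make the first of these fields more tangible, the following is a minimal, purely illustrative sketch of an artificial neural network: a single artificial neuron (a perceptron) that learns the logical OR function from examples. Real neural networks consist of many layers of such units; the training rule and all names below are our own simplification, not code from any product mentioned in this paper.

```python
# Minimal perceptron: a one-neuron "neural network" trained on logical OR.
# Illustrative sketch only; real networks use many layers and neurons.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # weights for the two inputs
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            # Step activation: the neuron "fires" if the weighted sum exceeds 0
            out = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = target - out
            # Nudge the weights toward the desired output
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0

# Truth table of logical OR as training data
samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(samples)
print([predict(w, b, x1, x2) for (x1, x2), _ in samples])  # [0, 1, 1, 1]
```

The essential point for this paper is not the mathematics but the behavior: the program is not told the rule, it derives it from examples, which is the property that later lets intelligent objects adapt to their users.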
With the transition of AI from research to application, new fields like Deep Learning are emerging. Platforms like Google and Facebook in particular are making active use of these developments, and we can see that the latest AI programs and devices no longer belong to just one of the above mentioned fields but are usually a combination of several (Metz, 2016).

Research Framework
Derived from the possibilities of Artificial Intelligence, we define the "two-layer design approach". This means that smart objects always have an underlying layer, which refers to their technological constitution, such as computing power, included sensors, and the connectivity technology used, like Bluetooth, Wi-Fi or RFID, while at the same time having a top layer, which refers to the visual elements through which we understand, use and manipulate the object. This approach is used in the following case studies to easily differentiate two fields in which design plays a different role. To better understand the criteria of this approach, it is important to understand how the shape of an object influences our understanding of its function and our emotional connection with it, and how this translates into the defining factors of the top layer. For the underlying layer, it is important how our interaction with the object is affected by the technology used within it, as well as by the visible technology with which we have to interact directly, such as displays or buttons, to name just two examples.

The Relationship Between Shape, Function and Emotion
When interacting with objects, we depend on their ability to let us discover and understand how to interact with them and where to perform these actions. More complex objects tend to require some kind of explanation, while simpler things should be self-explanatory. Especially with smart machines, understanding how objects deliver their function can become more and more frustrating. This comes down to the difference in the way humans and machines think. Machines usually strictly follow a certain set of rules which have been developed by their designers and are generally unknown to their users. As long as the machines follow this approach correctly and give us the wanted result, we do not question their doing. Only when they do not work in the expected way do we humans try to understand their reasoning, but since we think in more abstract and less logical ways, we cannot, and we are left frustrated (Norman, 1985, pp. 3-6).
In The Design of Everyday Things, Don Norman defines the factors which influence the relationship between the shape of an object and our perception of its function through six elements: affordances, signifiers, constraints, mappings, feedback, and the conceptual model. Affordances indicate how objects could possibly be used, while signifiers communicate specifically where the usage should take place. Constraints are indications of limitations of how an object can be used and can be divided into physical, cultural, semantic, and logical ones (Norman, 1985, p. 125). Mapping refers to the way controls and displays are laid out on an object and can be emphasized through natural mapping, which uses spatial analogies to lead to immediate understanding. Feedback is the way an object or a system informs you about the status of its actions and that it is working according to your request. A conceptual model is a way for us to explain to ourselves how an object works; it can also be described as a model in our mind that represents a very subjective understanding of the workings of an object (Norman, 1985, p. 45).
While it is important to understand how the shape of an object can influence our understanding of its function, it is also important to acknowledge its influence on our emotions and the effect this has on the overall experience of a product. This is because "emotion and cognition are thoroughly intertwined" (Norman, 2004, p. 16). Referring again to Don Norman, in his later book Emotional Design he defines three layers in products which affect our emotional relationship with an object: visceral, behavioral and reflective. The visceral layer is about how the appearance, touch, and feel of an object influence its first impact on us. The behavioral layer speaks to the interactive part of ourselves and arises through the way an object functions, can be understood and used, and how it physically feels when being used. Perhaps the strongest layer, the reflective layer, can influence the overall impression of an object the most, since it refers to the messages and meaning we see in an object, or the cultural references we perceive through it (Norman, 2004, pp. 63-89).
When analyzing smart objects and extrapolating their potential morphological evolution into the era of AI, we should refer to these elements in order to define changing factors and characteristics.

The Relationship Between Technology and Interaction
"Interaction design started from two separate directions, with screen graphics for displays and separate input devices, but it got more interesting when the hardware and software came together in products. Then along came the information appliance, implying that technology would start to fit into our everyday lives, and when the Internet connected everything together, we found ourselves designing complete experiences." (Moggridge, 2007, p. 293) In this David Kelley quote from 2004 it becomes already clear how the evolution of technology and its combination with everyday objects has been influential in the way we understand interactions with those objects and how this affects the overall experience. In particular, when analyzing objects for their interaction capabilities through technology, they need to be examined for their graphical user interface (GUI) and their tangible user interface (TUI). The GUI defines the intangible interaction components as they can be found on some type of computer screen. The TUI, on the other hand, is made up of a tangible representation of the technological capabilities underlying the object but it is physical and can be directly manipulated, giving information a physical form (Moggridge, 2007, p. 527).
When analyzing objects according to this relationship, we have to focus on the difference between a graphical and a tangible user interface and try to understand how a rise in technological complexity within an object leads to a stronger use of one or the other type of interface. At the same time, it is important to analyze how a certain type of interface and the technological complexity also make the overall experience of interacting with the object more complex.

Additional Defining Factors
The previously described two-layer design approach is applied to support the detailed research on the single objects themselves. While it can give insights into the main characteristics of an object's shape and its implications, and enables a thorough analysis of technical elements and their significance for the interaction with the object, it does not give the whole picture. To widen the scope of the research and to enable faster and better comparability between the case studies, we define the following five sectors of observation: 1. Ecosystem: Considering the object in the context of the history of design. 2. Software: Putting the object in perspective with existing AI programs as well as those currently under development. 3. Hardware: Determining to which degree the technology is part of a standard hardware system which is able to generate local data, process and store it, as well as communicate it. 4. Communication: Indicating whether a certain protocol for communication is being followed and how it affects the interaction. 5. Design: Focusing on aesthetics and interaction to understand the effect on human needs, wants and emotions.

Case Studies
The following case studies are described to give an understanding of the current design of smart objects. By following our framework, we aim to provide more comparability between the single case studies in order to achieve more significant results. The case studies are chosen in such a manner that many different fields of application are explored. The most representative cases are described in this chapter, while additional case studies are summed up in the following graphic.

Nest Thermostat
Electronic thermostats that operate partially on their own, without regular human input, have a history of their own and can stand as an example of the evolution of the design of objects when technology and some type of intelligence are added. The Nest thermostat finds itself at the top of this evolution thanks to its design and its technological capabilities. It consists of an outer ring around a circular display and can be connected to an app. The design is derived from Henry Dreyfuss' design of the T-86 thermostat for Honeywell in 1953. The outer ring is an obvious affordance for turning the temperature up or down by turning the ring, similar to a knob. As a tangible user interface it would be too limited to access all of Nest's functions; it is thus complemented by a graphical interface in the shape of the circular display. The shape, and the way it is installed on the wall, can evoke several conceptual models. First, it finds its place where the less intelligent thermostat was placed before, giving you a clear idea of its function in the house. Second, the act of turning the ring to raise or lower the temperature resembles our behavior with the classic heater when manipulating temperature. Third, its round shape and exposed position on the wall resemble the way we put up analog clocks. The latest generation of Nest has picked up this model and reinforced it by adding an analog clock screen mode. The optical elements, and the way one interacts with them, have a strong effect on both the visceral and the behavioral level. Also, the fact that Nest is still a rather exclusive product implies that, on a reflective level, it can create a strong self-image for its users. Nest has proprietary intelligence, can learn your temperature preferences at certain hours over time, and can create patterns, all in order to save energy. In combination with the app, these patterns can be manipulated, and Nest can be remote controlled from anywhere in the world.
Its capabilities in terms of intelligence have increased since Nest was bought by Google, as it can now access Google technology which uses strong AI algorithms. In terms of feedback, though, Nest's only possibility is indirect feedback through rising or sinking heat and the indication on the display. If something goes wrong, this feedback might not be sufficient. Also, the reasoning behind the algorithms which create its automated behavior is not always understandable to the user and could thus lead to confusion (Pernice, 2015).
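The pattern-learning behavior described above can, in a highly simplified form, be sketched as follows. The averaging rule and all class names are our own illustration of the general idea of learning a schedule from manual adjustments, not Nest's actual algorithm:

```python
# Highly simplified sketch of schedule learning in a thermostat.
# The averaging rule is illustrative; real products use far richer models.
from collections import defaultdict

class LearningThermostat:
    def __init__(self, default_temp=20.0):
        self.default_temp = default_temp
        self.history = defaultdict(list)  # hour of day -> manual settings

    def manual_adjust(self, hour, temp):
        """User turns the ring: remember the preference for this hour."""
        self.history[hour].append(temp)

    def target_for(self, hour):
        """Predict a set point: average of past adjustments for that hour."""
        temps = self.history[hour]
        return sum(temps) / len(temps) if temps else self.default_temp

t = LearningThermostat()
t.manual_adjust(7, 21.0)   # Monday morning: user turns it up
t.manual_adjust(7, 23.0)   # Tuesday morning: a bit warmer still
print(t.target_for(7))     # 22.0 (learned preference)
print(t.target_for(3))     # 20.0 (no data yet, falls back to default)
```

Even in this toy form, the sketch shows the feedback problem discussed above: the predicted set point is the output of an internal rule the user never sees, so unexpected behavior is hard to explain from the outside.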

Amazon Dash Button
The Amazon Dash Button might not seem like the typical design product at first, and neither is it utterly smart. Its affordance, which is to be pushed, does not immediately indicate which kind of function it holds. A signifier is given through a brand name which takes up most of the visible space of the product: it indicates which type of product can be ordered through the push of the button. The overall shape does not have much of a rich design language and is at most reminiscent of a remote control for garage doors. While this is not a strong conceptual model, it is enough to imply the main function: to trigger one specific action from a remote position, which is exactly what a Dash Button does. It triggers an order within your Amazon account for the indicated brand's product, so it will be delivered to you within two days. The Dash Button has no graphical user interface, but in order to make it work seamlessly it still requires a rather intensive setup process within the Amazon platform, which has to happen on an external screen. The button, which is its main element, is most likely too basic to really be called a tangible user interface, especially given that the shape itself does not clearly indicate its function without knowledge of the product's context. While a small LED is added to give information about the status of the order, this is a very limited type of feedback. If anything happens between the push of the button and the delivery of the ordered goods, the feedback has to be sought on the Amazon platform. While the design does not have a big effect on the visceral or reflective level, it does impact the behavioral level quite significantly. By bridging the gap between "I would like to buy this" and "I just bought this" without having to go through an extensive search and buy process, it makes an everyday task a lot simpler while lowering our overall engagement with a screen operated device.
Overall, the Amazon Dash Buttons are not very smart, nor do they stick out in terms of physical design, but they do make life easier if you are willing to give up some of your freedom as a consumer, such as freedom of choice and price comparison. The newly released Amazon Echo, a voice recognition powered AI assistant in your house capable of ordering everyday goods as soon as the user utters the desire, will possibly render the Dash Buttons obsolete, since it offers a technologically advanced alternative with an even more user friendly experience.
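The interaction logic of such a button, one press triggering one pre-configured order, with an LED as the only local feedback channel, can be sketched as follows. All names are hypothetical; this is an illustration of the interaction pattern, not Amazon's actual firmware or API:

```python
# Illustrative sketch of a Dash-style button: one press triggers one
# pre-configured order, and a status LED is the only local feedback.
# All names are hypothetical; this is not Amazon's actual API.

class DashStyleButton:
    def __init__(self, product_id, place_order):
        self.product_id = product_id     # fixed during the setup process
        self.place_order = place_order   # callback into the shopping platform
        self.led = "off"

    def press(self):
        self.led = "white"               # order in progress
        try:
            self.place_order(self.product_id)
            self.led = "green"           # success: order confirmed
        except Exception:
            self.led = "red"             # failure: user must check the app

orders = []                              # stand-in for the shopping platform
button = DashStyleButton("detergent-001", orders.append)
button.press()
print(orders, button.led)                # ['detergent-001'] green
```

The sketch makes the feedback limitation concrete: three LED colors are the entire local vocabulary of the device, so anything beyond "it worked" or "it failed" must be resolved on an external screen.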

Dumb-to-Smart Continuum
When trying to understand how technological complexity influences the shape of smart objects, we realize that there are huge differences in the smartness of the products. We see that products which are very smart are usually technologically more complex, require more direct ways of interaction, and thus tend to have a proprietary graphical user interface or require an external screen. Rather dumb objects usually have a lower technological complexity but instead already indicate through their shape how they can be manipulated and what their function is. These objects usually do not have a GUI, but sometimes even lack well designed tangible user interfaces. To make this finding clearer, we place the analyzed objects along a "dumb-to-smart continuum" which helps us grasp the characteristics of an object more quickly. One observation that needs to be pointed out is that genuinely dumb objects seem to be forcefully turned into smart objects through the additional, excessive application of technology, while rather complex objects which lower their technological complexity in order to create a more natural way of interaction seem to create a more pleasant overall user experience.

Interface and Fallback
Feedback is critical for a pleasant interaction with smart objects and is usually given through a certain interface. As described before, technological complexity usually favors a GUI, since it is easier to give specific feedback there. When the goal is to design complex objects in a way that they can be interacted with naturally, without losing any of their complexity and technical functionality, it becomes difficult to give proper feedback. Especially when a desired action is not executed in the way the user expects, frustration is usually the result if insufficient feedback is given. In order to avoid this, certain fallbacks have to be put in place for exactly these situations of failure. Ideally, these fallbacks consist of tangible elements that can indicate the reason for the failure and help to reverse it, creating the desired outcome or an equally pleasing alternative.
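This fallback idea can be expressed as a simple pattern: try the desired action, and on failure walk through a chain of fallbacks that first indicate the reason and then offer an alternative. The scenario and all names below are our own illustration, assuming a hypothetical connected light as the failing object:

```python
# Sketch of a fallback chain for an intelligent object: if the primary
# action fails, tangible fallbacks indicate the reason and offer an
# alternative instead of leaving the user without feedback.
# The connected-light scenario and all names are illustrative only.

def run_with_fallbacks(action, fallbacks):
    """Try the desired action; on failure, walk the fallback chain."""
    try:
        return action()
    except Exception as err:
        for fallback in fallbacks:
            result = fallback(err)
            if result is not None:   # a fallback produced an alternative
                return result
        raise                        # nothing recovered: surface the error

def dim_lights():
    raise RuntimeError("hub unreachable")  # simulated failure

def blink_status_led(err):
    print(f"status LED blinks red ({err})")  # tangible indication of failure
    return None                              # indication only, no recovery

def suggest_manual_switch(err):
    return "use the physical dimmer on the wall"  # pleasing alternative

result = run_with_fallbacks(dim_lights, [blink_status_led, suggest_manual_switch])
print(result)
```

The ordering of the chain mirrors the design goal stated above: the first fallback explains the failure through a tangible element, and only then does a later fallback propose an alternative way to reach the outcome.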

Interpretation of Findings
According to our hypothesis, the shape of intelligent objects in the era of AI will change. When interpreting the findings of our analysis of existing smart objects, we see that greater complexity usually goes along with an adaptation in shape. With AI ubiquitously present in all devices, enabling them to communicate with each other and learn from one another, we can say that smart objects will become significantly more complex but also more intelligent, which is why we refer to them in this scenario as intelligent objects. Following this line of thought, there are two possible changes in shape: one, with rising complexity, the shape and interfaces will become even more complex, or so complex that they have to be completely outsourced to external screens; two, the rising complexity will make digital interaction so unfriendly for the user that the added intelligence will be used to enable designs that focus completely on tangible interfaces and natural interactions between humans and objects. We want to believe in the second possibility, since making functional products that are of value to users and enhance our human traits is what design should truly aim for. In order for the second possibility to become reality, we conclude with four goals for the development of the design of intelligent objects: