Authors: Lukas Esterle, John NA Brown
The number of computing systems in our environment is constantly increasing, and it's not just the obvious individual devices we have all added to our lives. This trend is accelerating even further due to often unseen advances in pervasive computing, cyber-physical systems, the Internet of Things, and Industry 4.0, as they manifest in smart cities, smart homes, smart offices, and smart transport. The numbers alone make centralized control problematic
from an engineering point of view, even without considering the speed
of dissemination and adoption. The vast diversity of interfaces and interactional requirements is imposing an as-yet-unmeasured increase in cognitive and physiological demands on
all of us. One way to lessen the impacts of these human and
technological demands is by offloading some control to some of the
individual devices. This not only relieves demands on
miniaturization, control systems, and server infrastructures, but
also relieves cognitive and physiological demands on the users, and
allows the devices to react more quickly to new situations, and even
to known or anticipated situations that unfold more rapidly than
current hierarchical control systems can accommodate.
One approach to imbuing individual devices with more autonomy is to design them to be self-aware. This would enable devices to learn about themselves and their environment, to develop and refine these models during runtime, and to reason about them in order to make profound decisions. Different levels of self-awareness have been proposed, addressing the various degrees to which a computational system can be aware. It has been demonstrated that this can improve system performance, even when collaborating with others.
We
offer an outline of three important factors that have the potential
to challenge the success of collaborating self-aware systems.
Situatedness
Systems distributed in a real-world environment
will perceive that environment differently, even when their
abilities to perceive it are equal and they are in close proximity
to one another. The following figure depicts a network of three smart cameras, each able to perceive its environment and process this information locally.
This network illustrates two problems with respect to the situatedness of individual devices. Cameras A and B are physically very close, mounted on a common pole. However, due to their constrained perception of the world, they cannot perceive the same objects at the same time. Camera C, on the other hand, is mounted on a house and observes the same area as camera B but from a different perspective, which means that their individual perceptions of a simultaneously viewed object can differ. Figure 1 shows us
that, while camera B sees a smooth round object that is mostly
green, camera C observes an object of non-uniform shape that is
mostly red. Even if they share their information, they would need to
also share an understanding of their differing perspectives in order
to combine their perceptions and recognize that they are seeing the
same object.
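To make this concrete, reconciling two such viewpoints amounts to transforming each camera's local observation into a shared frame of reference before comparing them. The following is a minimal sketch of that idea in a flat 2-D world; the pose model, function names, and tolerance are illustrative assumptions, not something from the article.

```python
import math

def to_world(cam_pos, cam_heading, bearing, distance):
    """Convert a camera-local observation (bearing and distance,
    relative to the camera's heading) into world coordinates."""
    angle = cam_heading + bearing
    return (cam_pos[0] + distance * math.cos(angle),
            cam_pos[1] + distance * math.sin(angle))

def same_object(obs_a, obs_b, tolerance=0.5):
    """Two observations are taken to refer to the same object if
    their world-frame positions agree within a tolerance (metres)."""
    return math.dist(obs_a, obs_b) <= tolerance

# Camera B at the origin facing east; camera C on a house, facing back west.
# Both happen to observe an object located at roughly (5, 1).
b_world = to_world((0.0, 0.0), 0.0, math.atan2(1, 5), math.hypot(5, 1))
c_world = to_world((10.0, 1.0), math.pi, 0.0, 5.0)

print(same_object(b_world, c_world))  # → True
```

Without the shared `to_world` transform, B and C would be comparing raw bearings and distances that look nothing alike, even though they describe the same object.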
Heterogeneity
When operating alongside or in collaboration
with others, a system might not be able to simply make assumptions
about the abilities and behavior of another system. As an example, consider two digital cameras that both perceive their
environment. Even though these two cameras may observe the same
object in the same way, their perceptual tools may differ, and this
could conceivably result in completely different perceptions of the
same object. One might imagine a black-and-white sensor and a
standard color sensor in the two cameras. Here the cameras cannot
simply exchange color information about objects, as this would not result in a common understanding. Similarly, different zoom levels can lead to different resolutions, permitting one camera to perceive details that another cannot see.
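One way to cope with such heterogeneity is for each system to project its observations into a representation that every peer can produce before exchanging them. The sketch below, with illustrative class names, shows a colour camera reducing RGB to luminance (using the standard ITU-R BT.601 weights) so it can compare notes with a black-and-white peer; the specific values and tolerance are assumptions for the example.

```python
def rgb_to_luminance(r, g, b):
    """ITU-R BT.601 luminance: a feature both camera types can provide."""
    return 0.299 * r + 0.587 * g + 0.114 * b

class ColourCamera:
    def observe(self):
        return {"rgb": (20, 200, 30)}   # a mostly green object
    def shared_view(self):
        obs = self.observe()
        return {"luminance": rgb_to_luminance(*obs["rgb"])}

class MonoCamera:
    def observe(self):
        return {"luminance": 127.0}     # sensor reports brightness only
    def shared_view(self):
        return self.observe()

def compatible(view_a, view_b, tolerance=10.0):
    """Observations can only be compared in the shared feature space."""
    return abs(view_a["luminance"] - view_b["luminance"]) <= tolerance

print(compatible(ColourCamera().shared_view(), MonoCamera().shared_view()))
```

Exchanging raw `observe()` outputs would fail here, since the mono camera has no notion of "rgb"; agreeing on `shared_view()` is what makes the exchange meaningful.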
Individuality
Systems are often designed to perform very
specific tasks. If they are intended to collaborate with others,
this collaboration is usually clearly defined at the time of their
design. If we want future systems to be able to establish
collaboration autonomously, without a priori knowledge of their
potential collaborators, we will have to build them with the ability
to model the potential collaborators that they encounter. In
addition, they have to be able to model the behavior of those new collaborators and adapt their own behavior according to larger collaborative models developed on the fly.
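At its simplest, such on-the-fly collaborator modelling means recording what a previously unknown peer is observed doing and predicting its likely next action, so the system can adapt its own behavior to complement it. The following sketch uses a plain frequency model; the class and action names are illustrative assumptions, not a prescribed design.

```python
from collections import Counter

class PeerModel:
    """Runtime model of a collaborator encountered without a priori
    knowledge: built entirely from observed actions."""

    def __init__(self):
        self.observed = Counter()

    def record(self, action):
        """Update the model with one observed peer action."""
        self.observed[action] += 1

    def predict(self):
        """Return the peer's most frequently observed action, if any."""
        if not self.observed:
            return None
        return self.observed.most_common(1)[0][0]

model = PeerModel()
for action in ["track", "track", "handover", "track"]:
    model.record(action)

# If the peer usually tracks, this system might choose a complementary
# behavior, e.g. covering a region the peer tends to ignore.
print(model.predict())  # → "track"
```

A deployed system would need richer models (context, timing, reliability of the peer), but the pattern is the same: observe, model, then adapt one's own behavior against the model.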
Conclusion
Current work on self-aware systems focuses on the individual computing systems, rather than on defining, designing, and developing features that would enable and improve heterogeneous collaboration during
runtime. In order to facilitate collaboration among systems, we have
proposed additional levels of networked self-awareness [1].
Implementing these additional levels of networked self-awareness will
enable systems to develop adaptable models of their environment, of
other systems, and of themselves, as well as the ways in which those
models interact and impact one another. Systems equipped with such models should be able to meet the challenges outlined above and to collaborate with other systems in achieving their shared and unshared goals.
References
[1] L. Esterle and J. N. Brown, "I Think Therefore You Are: Models for Interaction in Collectives of Self-Aware Cyber-physical Systems," Transactions on Cyber-physical Systems, under review, p. 24, 2019.