Monday, June 24, 2019

Can AI be Decolonial?

By: Asma Mansoor

Associate Editor: Muneera Bano (@DrMuneeraBano)


In a world marked by economic, racial and gender-based hierarchies, can AI be decolonial?

If not, can it become decolonial? 


These questions might elicit criticism since computing and its associated fields are generally assumed to be democratic in flavour, working in a realm where constructs such as race and gender are thought to be reduced to irrelevant abstractions. But it is precisely this reduction that I find problematic, specifically in a world in which many regions are still experiencing a colonial hangover in the form of neocolonial exploitation. This exploitation, galvanized by various Capitalist corporate structures, manifests itself via technological interventions, such as surveillance and drone technology, biotechnology and the abuse and degradation of indigenous environments in the garb of progress. From the fifteenth century onwards, European colonization has been supplemented by technological advancements which have helped consolidate the various Others of the West. As cyberspace expands and AI becomes more autonomous, the possible colonial implications of these advancements are gradually becoming a matter of concern for many people living in the Global South, myself included. Our fears are not unfounded. The CIA’s Weeping Angel program, which permitted the installation of spying software on smart TVs, was sanctioned for devices headed to countries suspected of harbouring and supporting terrorism. This reflects how surveillance technologies operate as tools of Othering in the hands of Euro-American power structures, inferiorizing peoples and countries. Technology in all its forms is helping supra-national Capitalist conglomerates become increasingly colonial as they impose their sovereign right to regulate and manipulate the technology that they ration out to states and groups, as we saw in the case of Facebook. So to ask whether AI, as a component of this technological colonization, can be decolonial is a rather loaded question which cannot be answered in a simple manner.


What I imply by decoloniality is not an end of colonization per se. I take it in the sense of Walter Mignolo, who defines decoloniality as a non-hierarchical inter-epistemological exchange that encourages epistemic disobedience and delinking from colonial epistemologies in order to build a world where many worlds can exist in a state of non-hierarchical epistemic osmosis. However, ours is also the age of the Empire which, according to Michael Hardt and Antonio Negri, is the sovereign power that regulates global exchanges. Like the decolonial ethos, which advocates a cross-cultural exchange of knowledge without centralizing any mode of thinking, this Empire also claims to encourage a decentred osmosis, at least in theory if not in practice. What makes the operations of this global Empire different from decolonial politics is that the Empire upholds its epistemic sovereignty and cannot afford to decentralize its economic, technological and intellectual supremacy. Computing and AI are vital components in this global regulatory apparatus.

Therefore, I believe that at the present moment in time, AI is not decolonial unless the formerly colonized appropriate it for their own interests, a task which I am convinced is fraught with obstacles. AI responds to the master because it is programmed by the master, who needs to uphold global hierarchies and inequalities. It operates as the Golem in the hands of the Global Capitalist masters, determining on their behalf who is to be excluded, who is to be included, and the extent to which they are to be included.

Biases are encoded within its very algorithmic genes, as the works of Safiya Umoja Noble and David Beer indicate. It inherits the aesthetic biases of its makers, including those governing perceptions of race and gender. An international beauty contest judged by AI machines in 2016 revealed that these machines did not consider dark skin beautiful. Driverless cars are more likely to hit people with darker skin. AI-based voice assistants have been reported to respond less reliably to unfamiliar accents or to the voices of women.

Like Macaulay’s Minute Men, AI is also a product of a colonial mentality. It absorbs not only the colonisers’ ways of knowing but also their prescriptions of bodily aesthetics. However, at the current moment in time, AI is better off than Macaulay’s Minute Men, who experienced displaced and schismatic identities in their effort to become like the Masters. AI, at present, is not aware of these complexes. Perhaps, in a few years, as it gains sentience, AI will develop similar complexes in its efforts to become more human. At the moment, it is fully complicit with the neocolonial agenda wherein all Others are equal but some Others are more Other than others. It keeps an eye on rogue elements, further marginalizing those who are already marginalized. It is not decolonial precisely because it is supplementing the hierarchies that decoloniality sets out to dismantle.

So what needs to be done? Perhaps what goes into AI's programming needs to be rethought with a more acute awareness, and that can be done by taking on board social, philosophical and literary theorists. Perhaps then can the decolonization of AI truly begin.

Sunday, June 9, 2019

Self-awareness and autonomy in pervasive computing systems


Associate editor: Danilo Pianini (@DanySK86)

The number of computing systems in our environment is constantly increasing, and it's not just the obvious individual devices we have all added to our lives. This trend is accelerating even further due to advances in the areas of pervasive computing, cyber-physical systems, the Internet of Things and Industry 4.0, as they manifest in smart cities, smart homes, smart offices, and smart transport. The numbers alone make centralized control problematic from an engineering point of view, even without considering the speed of dissemination and adoption. The vast diversity of interfaces and interaction requirements is imposing an as-yet-unmeasured increase in cognitive and physiological demands on all of us. One way to lessen the impact of these human and technological demands is by offloading some control to the individual devices themselves. This not only relieves demands on miniaturization, control systems, and server infrastructures, but also relieves cognitive and physiological demands on the users, and allows the devices to react more quickly to new situations, and even to known or anticipated situations that unfold more rapidly than current hierarchical control systems can accommodate.
One approach to imbuing individual devices with more autonomy is to design them to be self-aware. This would enable devices to learn about themselves and their environment, to develop and refine these models during runtime, and to reason about them in order to make profound decisions. Different levels of self-awareness have been proposed, addressing the various degrees to which a computational system can be aware. It has been demonstrated that this can improve system performance, even when collaborating with others.
We offer an outline of three important factors that have the potential to challenge the success of collaborating self-aware systems.



Situatedness

Systems distributed in a real-world environment will perceive that environment differently, even when their abilities to perceive it are equal and they are in close proximity to one another. The following figure depicts a network of three smart cameras, each able to perceive its environment and process this information locally.


This network illustrates two problems with respect to the situatedness of individual devices. Cameras A and B are physically very close, mounted on a common pole. However, due to their constrained perception of the world, they cannot perceive the same objects at the same time. On the other hand, camera C is mounted on a house and observes the same area as camera B, but from a different perspective, which means that their individual perceptions of a simultaneously viewed object can be different. Figure 1 shows that, while camera B sees a smooth round object that is mostly green, camera C observes an object of non-uniform shape that is mostly red. Even if they share their information, they would also need to share an understanding of their differing perspectives in order to combine their perceptions and recognize that they are seeing the same object.
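
As a minimal sketch of what such a shared understanding could look like, the following Python fragment (all names, poses and tolerances are hypothetical, not taken from the original post) transforms two local detections into a common world frame before deciding whether they refer to the same object:

```python
import math

def to_world(detection_xy, cam_pose):
    """Transform a detection from camera-local to world coordinates.

    cam_pose = (cam_x, cam_y, heading_rad): the camera's assumed position
    and orientation in the shared world frame.
    """
    x, y = detection_xy
    cx, cy, theta = cam_pose
    wx = cx + x * math.cos(theta) - y * math.sin(theta)
    wy = cy + x * math.sin(theta) + y * math.cos(theta)
    return (wx, wy)

def same_object(det_b, pose_b, det_c, pose_c, tolerance=0.5):
    """Two local detections refer to one object if their world-frame
    positions lie within a tolerance (in metres) of each other."""
    bx, by = to_world(det_b, pose_b)
    cx, cy = to_world(det_c, pose_c)
    return math.hypot(bx - cx, by - cy) <= tolerance

# Cameras B and C observe the same object from different viewpoints.
pose_b = (0.0, 0.0, 0.0)            # on the pole, facing along the x-axis
pose_c = (10.0, 5.0, math.pi)       # on the house, facing the opposite way
print(same_object((4.0, 1.0), pose_b, (6.0, 4.0), pose_c))  # True
```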

Heterogeneity

When operating alongside or in collaboration with others, a system might not be able to simply make assumptions about the abilities and behavior of another system. As an example, consider two digital cameras that both perceive their environment. Even though these two cameras may observe the same object from the same viewpoint, their perceptual tools may differ, and this could conceivably result in completely different perceptions of the same object. One might imagine a black-and-white sensor and a standard color sensor in the two cameras. Here the cameras cannot simply exchange color information about objects, as this would not result in a common understanding. Similarly, different zoom levels can lead to different resolutions, permitting one camera to perceive details another camera might not be able to see.
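
A small, purely illustrative sketch of this idea (the capability names and the colour-to-intensity conversion are our assumptions, not part of the original post): before exchanging an observation, a camera downgrades it to the richest representation that both parties can interpret, here falling back from colour to intensity for a black-and-white peer.

```python
def shared_representation(obs, own_caps, peer_caps):
    """Reduce an observation to a representation both sensors support."""
    if "rgb" in own_caps and "rgb" in peer_caps:
        return {"kind": "rgb", "value": obs["rgb"]}
    # Fall back to intensity: a black-and-white peer cannot interpret colour.
    r, g, b = obs["rgb"]
    return {"kind": "intensity", "value": 0.299 * r + 0.587 * g + 0.114 * b}

colour_cam_obs = {"rgb": (200, 40, 30)}
print(shared_representation(colour_cam_obs, {"rgb", "intensity"}, {"intensity"}))
# -> {'kind': 'intensity', 'value': 86.7}
```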

Individuality

Systems are often designed to perform very specific tasks. If they are intended to collaborate with others, this collaboration is usually clearly defined at the time of their design. If we want future systems to be able to establish collaboration autonomously, without a priori knowledge of their potential collaborators, we will have to build them with the ability to model the potential collaborators that they encounter. In addition, they have to be able to model the behavior of those new collaborators and adapt their own behavior according to larger collaborative models that were developed on the fly.
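
One way to picture such a runtime model is sketched below in Python (the message format and class names are hypothetical): the system builds and refines a model of a newly encountered collaborator purely from the interactions it observes, and uses that model to decide how to adapt its own behaviour.

```python
from collections import defaultdict

class CollaboratorModel:
    """Minimal runtime model of a peer encountered without a priori knowledge."""

    def __init__(self, peer_id):
        self.peer_id = peer_id
        self.capabilities = set()                  # learned, not designed in
        self.observed_behaviour = defaultdict(int)

    def observe(self, message):
        """Refine the model from a message the peer actually sent."""
        self.capabilities.update(message.get("capabilities", []))
        self.observed_behaviour[message["type"]] += 1

    def can_handle(self, task):
        """Decide whether to delegate a task, based on the learned model."""
        return task in self.capabilities

peer = CollaboratorModel("camera-C")
peer.observe({"type": "hello", "capabilities": ["object-detection"]})
print(peer.can_handle("object-detection"))  # True: adapt own behaviour accordingly
```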

Conclusion

Current work on self-aware systems focuses on individual computing systems, rather than on defining, designing, and developing features that would enable and improve heterogeneous collaboration during runtime. In order to facilitate collaboration among systems, we have proposed additional levels of networked self-awareness [1]. Implementing these additional levels will enable systems to develop adaptable models of their environment, of other systems, and of themselves, as well as of the ways in which those models interact and impact one another. Systems equipped with such models should be able to meet the challenges outlined above and collaborate with other systems in achieving their shared and unshared goals.

References

  1. L. Esterle and J. N. Brown, "I Think Therefore You Are: Models for Interaction in Collectives of Self-Aware Cyber-physical Systems," Transactions on Cyber-physical Systems, under review, p. 24, 2019.

Monday, May 20, 2019

Architectural Security for Embedded Control Systems

Authors:  Jan Tobias Mühlberg (@jtmuehlberg), Jo Van Bulck (@jovanbulck), Pieter Maene (@pmaene), Job Noorman, Bart Preneel, Ingrid Verbauwhede, Frank Piessens
Associate editor: Danilo Pianini (@DanySK86)

Security issues in computer systems are pervasive, and embedded control systems – from smart home appliances and the Internet of Things to critical infrastructure in factories or power plants – are no different. This blog post summarises a line of research on architectural support for security features in embedded processors, with the potential to substantially raise the bar for attackers.

Security in Embedded Control Systems

Sensing, actuation and network connectivity are the basic building blocks for smart infrastructure. In a smart building, sensors detect human presence, measure air quality or room temperature, and communicate these measurements to control systems that operate lighting, air conditioning or other appliances. In Industrial Control Systems (ICSs) and smart factories, similar sensing setups may detect delays, production faults or hazardous situations, and determine the need for human intervention. Supply chain optimisations or alerts may be triggered in response. A smart car can detect dangerous objects on the road and alert the driver while triggering similar alerts in nearby cars. A smart city may combine all these scenarios and accumulate, aggregate and evaluate sensor inputs at an unprecedented scale to optimise traffic flow, air quality, noise, power consumption, and many other parameters, with the overall aim of facilitating the sustainable use of resources and increasing the quality of life for the city’s inhabitants.

As the trend to augment appliances and infrastructure with computerised sensing, actuation and remote connectivity continues, "smart environments" imply a range of threats to our security and privacy, and ultimately to our safety. The key amplifier for these threats is connectivity. The use of publicly accessible long-range communications – such as the internet or wireless communication technology – where data in transit may be subject to manipulation by adversaries, leads to an extended attack surface and to the exposure of sensing and control systems to attackers.

Of course, smart environments are insecure. Typical sensing and control networks are even less secure than our personal computers: many of these systems were not designed with security in mind, simply because they were never meant to be connected to a global communications infrastructure and thereby exposed to attacks. A substantial role is played by legacy systems that were developed and deployed at a time when the idea of making these systems "smart" by permanently connecting them to, e.g., supply-chain management was technically infeasible and not anticipated. Gonzalez et al. argue that two thirds of ICS vulnerability disclosures had an architectural root cause, while about one third of the vulnerabilities were due to coding defects. The problem is pervasive, and control systems across critical domains suffer from vulnerabilities resulting in exploits: since the Stuxnet incident we understand that industrial equipment can be physically damaged through cyber attacks.


At DEF CON 22, Scott Erven and Shawn Merdinger discussed the problem for medical devices and hospital equipment, and in 2017 UK hospitals were amongst the institutions hit hard by the WannaCry ransomware attacks against PCs, which encrypted critical data and denied access to it until a ransom was paid. In the same year, the NotPetya ransomware ravaged Maersk's world-wide network, taking down harbour infrastructure and shipping routes and causing substantial real-world damage.

Screenshot of the WannaCry ransomware that infected PCs on a global scale in 2017, leaving mission-critical data in institutions such as many British NHS hospitals inaccessible.

With the advent of complex infotainment systems and remote connectivity in automotive vehicles, researchers discovered vulnerabilities that allow attackers to remotely control critical functionality of cars (e.g. Checkoway et al., and Miller and Valasek). And attacks against the Ukrainian power grid led to pervasive and lasting blackouts during the Russian military intervention around Crimea.

Architectural Threat Mitigations

Our approach to architectural support for security leverages light-weight Trusted Execution Environments (TEEs). Intuitively, we enable the development of secure software by providing hardware extensions that guarantee that a computer will consistently behave in expected ways. Modern Trusted Computing systems provide some form of "enclaved execution" in a TEE, which protects a piece of software, the enclave, from malicious interactions with other software. Ultimately, TEEs rely on cryptography and provide mechanisms to securely manage and use cryptography in distributed software systems. To date, a range of implementations of this idea exist, aimed at different application domains. Maene et al. provide a comprehensive overview of the available technology and its capabilities.

Since 2012 we have been working on Sancus, an open-source TEE solution for embedded systems security. The current incarnation, Sancus 2.0, features strong software isolation, efficient built-in cryptography and key management, software attestation, and confidential loading of enclaves. While we cannot currently guarantee that our processor design is free of architectural vulnerabilities, we developed Sancus as an open security architecture, for which we can collectively develop a clear understanding of execution semantics and the resulting security implications. We advocate and aim for formal approaches to reason about the security guarantees that these architectures can provide, including the absence of micro-architectural bugs and side-channels. We consider such a principled approach essential in an age where society increasingly relies on interconnected and dependable control systems. Closed commercial products in this domain are certainly responsible for important achievements, e.g., secure virtualisation extensions, TPM co-processors, and enclaved execution environments such as Intel SGX, ARM TrustZone, and AMD SEV. However, we strongly believe that it is close to impossible for the vendors of these products to comprehensively guarantee the absence of certain classes of critical vulnerabilities in their highly complex products.

Sancus builds upon the openMSP430, an open-source implementation of Texas Instruments’ MSP430 processor core. The MSP430 and openMSP430 are designed for the Internet of Things and embedded control systems: they are relatively inexpensive low-end devices that feature very low power consumption. Natively, the device provides little security, which is very common for processors in this domain. Sancus guarantees strong isolation of software modules, which we refer to as Protected Modules (PMs), through low-cost hardware extensions. Moreover, Sancus provides the means for remote parties to attest the state of, or communicate with, the isolated software modules. Importantly, our implementation of these security features is designed to be small and configurable: Sancus-secured openMSP430 processor cores can be synthesised for a varying number of PMs. The configuration necessarily affects chip size (in gates) and power consumption. Yet, even the biggest (sensible) configuration will still result in a moderately cheap processor (probably below USD 1 per unit) and less than a 6% increase in power consumption in active cycles. The Soteria extension of Sancus even allows for offline software protection in low-end embedded devices.

Authentic Execution

Based on TEE primitives in commodity processors and in Sancus, we have developed approaches that provide strong assurance of the secure execution of distributed applications on shared infrastructures, while relying on a small Trusted Computing Base (TCB). Here, "secure execution" means that application modules are protected against a range of attacks that aim to steal secrets from the application, modify the application or tamper with the application’s control flow (notions of confidentiality can easily be implemented on top of authentic execution). These guarantees hold even in the presence of other applications, malware or an untrusted operating system executing on the same (shared) processor. A cryptographic process called remote attestation guarantees that components of a distributed application are mutually assured of the authenticity and integrity of other components of this application, regardless of whether these components execute on the same processor or on a remote site, or even on systems controlled by third parties such as cloud providers.

We build upon and extend security primitives provided by a TEE to guarantee authenticity and integrity properties of applications, and to secure control of input and output devices used by these applications. More specifically, we can guarantee that if such an application produces an output, then this output can always be explained in terms of the application’s source code and the inputs it received. This is fundamentally different from how these applications were built in the past: traditionally, the security of a distributed application would rely on the security of the hardware and an enormous stack of software, including e.g., operating systems, communication stacks, and system libraries, all of which can be seen as attack surface. Authentic execution on TEEs mitigates this by isolating application components, at least with respect to their security properties, from the overall software stack that is required to operate a device and to facilitate communication. This is what we refer to with our claim of containing functionality in a small TCB: The security properties of an application depend on trusting a substantially reduced volume of software. Ideally, this TCB can be reduced to the processing hardware and the core application software. Experiments (see Noorman et al., Van Bulck et al., Mühlberg et al.) show that we are often able to rely on only 1% to 10% of the software volume to implement critical functionality securely.
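
The sketch below is purely illustrative and does not use the Sancus API; in a real deployment the keys would be derived and kept inside the TEE and the checks enforced by protected modules. It only mimics the message-level idea: an output event is accepted downstream only if it carries a fresh, authentic tag, so every accepted output can be traced back to the sending module and its inputs.

```python
import hmac, hashlib, os

shared_key = os.urandom(16)  # stands in for a module key that hardware would protect

def seal(event: bytes, nonce: int) -> bytes:
    """Attach a nonce and MAC so the receiver can check authenticity and freshness."""
    msg = nonce.to_bytes(8, "big") + event
    return msg + hmac.new(shared_key, msg, hashlib.sha256).digest()

last_nonce = -1

def accept(packet: bytes):
    """Return the event only if the tag verifies and the nonce is fresh."""
    global last_nonce
    msg, tag = packet[:-32], packet[-32:]
    nonce = int.from_bytes(msg[:8], "big")
    if nonce <= last_nonce:
        return None                     # replayed message
    expected = hmac.new(shared_key, msg, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return None                     # tampered or forged
    last_nonce = nonce
    return msg[8:]

print(accept(seal(b"open-valve", 1)))   # b'open-valve'
print(accept(seal(b"open-valve", 1)))   # None: replay is rejected
```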

Applications and Demonstrators

In "Sancus 2.0: A Low-Cost Security Architecture for IoT Devices" we outline a number of possible applications for Sancus-like technology. In the last years we have built a range of demonstrators and conducted feasibility studies to  illustrate these use cases. We have conducted extensive evaluation of the security and performance aspects of our approach; the prototypes show that Protected Module Architectures (PMAs) together with our programming model form a basis for powerful security architectures for dependable systems in domains such as Industrial Control Systems, the Internet of Things or Wireless Sensor Networks. Amongst our demonstrators are, for example, ideas to implement periodic inspection and trust assessment functionality for legacy IoT applications and proof-of-concept components for a secure smart metering infrastructure.


Demo setup for the VulCAN approach to securing automotive CAN networks. This demo has two dashboards, illustrating the integration of legacy car components in a secure environment. One side of the demo reacts to attacks as a conventional, insecure car would, while the other side uses Sancus-based software protection and attestation.

The video above shows our most comprehensive demonstrator, a secured vehicular control network. Specifically, we provide a generic design for efficient and standard compliant vehicular message authentication and software component attestation, named VulCAN. This demonstrator is based on the understanding that vehicular control networks, in particular the pervasive (beyond the automotive sector) CAN bus, provide no security mechanisms. Our approach advances the state-of-the-art by not only protecting against network attackers, but also against substantially stronger adversaries capable of arbitrary code execution on participating Electronic Control Units (ECUs). We demonstrate the feasibility and practicality of VulCAN by implementing and evaluating two previously proposed, industry standard-compliant message authentication protocols on top of Sancus. Our results show that strong, hardware-enforced security guarantees can be met with a minimal TCB without violating stringent real-time deadlines under benign conditions.

In our experience, practitioners have difficulties in understanding how enclaved execution can be leveraged, in particular in heterogeneous distributed networks. Over the past six years, our research group has gained significant experience with applications of Sancus and TEEs. To cover gaps in the understanding of these domains amongst software developers "in the wild", we have developed extensive tutorial material that explains how to build secure distributed applications along the lines of the authentic execution idea.



Wrapping Up

Security issues in computer systems are pervasive. So pervasive that media attention, even for attacks that affect millions of users, fades away within a few days. Up till now, most of these attacks have caused little more than financial damage and the compromise of personal data. Yet, recent incidents have shown that with the advent of connected cyber-physical systems, cyber attacks will, to an increasing extent, have physical consequences and can put the population at large at the risk of suffering physical harm, in particular when critical infrastructure is affected. To counter these risks, we must pervasively embrace security in our engineering efforts.

There are technological solutions to provide security for distributed applications such as control systems. In this blog post we summarise our work on open-source TEEs and the Sancus processor, which addresses security in low-end systems. The current incarnation of Sancus, version 2.1, is available on GitHub. We have developed a research agenda for security extensions in processors, which leverages open-source concepts so that the community can collectively develop a clear understanding of execution semantics and the resulting security implications. Here we envision Sancus serving as an open-source research vehicle with limited complexity, which allows us to address micro-architectural vulnerabilities in processors in a principled and step-by-step way. We argue that without such an understanding, regulatory and legal requirements regarding safety and security, but also privacy-related regulations such as the GDPR, are hard to satisfy. Our ongoing research in this field focuses, e.g., on extending Sancus with provable resistance against side-channel attacks such as Nemesis. In a second line of research, we are exploring novel application domains for Trusted Computing technology in the context of distributed mixed-criticality systems with stringent real-time constraints. An up-to-date overview of our research activities and publications is available on the Sancus website.

Importantly, TEEs alone will not solve security: Any secure development process must embrace requirements analysis and threat modelling early to be effective and to advise the choice of appropriate technologies. Moreover, implementing security requires engineers at all levels of a system stack to understand the security implications of their choices of technology, to develop effective communication strategies to inform other levels of their assumptions, requirements and guarantees, and to be ready to adapt to change.

Beyond understanding and using the right technological basis for building secure systems, we believe that there is a need for a proactive legislative approach that combines a careful assessment of the state of the art in protective technologies with a gradual increase in the liability of software and hardware vendors for security and privacy incidents.

Sunday, May 19, 2019

Motivational Modelling

By: Leon Sterling [1] (@swinfict), Rachel Burrows [2] 

Associate Editor: Muneera Bano (@DrMuneeraBano)



We must give as much weight to the arousal of the emotions and to the expression of moral and esthetic values as we now give to science, to invention, to practical organization. One without the other is impotent. Lewis Mumford, Values for Survival, 1946


Now more than ever we are seeing a blurring of the lines between the social sciences and software engineering. Software developed today incorporates and adapts to our values, attitudes, emotions and behaviours, amongst other things. We need to improve our techniques for empirically reasoning about these concepts, and then ensure they are effectively addressed in design.

Let us consider emotions. People tend to reject software that does not adequately support the way they wish to feel while interacting with it. Do existing software engineering techniques effectively translate emotional goals and requirements into design? We contend that requirements relating to emotions differ from traditional functional and non-functional requirements. An emotional goal, such as the goal of feeling empowered while interacting with software, is a property of a person and not of the software.

Emotional goals are inherently ambiguous, subjective, difficult to elicit, difficult to represent, difficult to address in design, and difficult to evaluate.  Existing artefacts that capture soft goals include use cases, personas, scenarios or cultural probes. However, these alone are still insufficient when designing for technology embedded within complex social situations.

For instance, our work in using electronic health records for self-managing health has shown that patients wanted to feel empowered, in control and resilient, while maintaining meaningful connections with family and carers. Current solutions fail to adequately address these emotional goals; citizens have been confronted with a platform which they refuse to trust with their personal data.

Emotions and Design


Great designers articulate emotional goals as higher-level objectives and try to align with the desires, needs and emotions of users. These goals are conveyed in brand values and marketing material, and are used to inform key design decisions. Hitting the right emotional tone is part of empathising with the customer and user, a key step in design thinking.

We refer to emotions despite the lack of consensus on exactly what emotions are. Some believe in a hierarchy of emotions, building from basic emotions such as fear, anger or joy. Others believe that emotions are constructed concepts developed through life experience. We advocate being able to address emotions as software requirements.

Motivational Modelling

Motivational modelling is a lightweight technique that has emerged from our research for expressing the emotional requirements of technology engagement alongside the goals to be achieved. Motivational modelling has now been successfully used in several industry projects, in domains including homelessness, teaching, healthcare and teleaudiology.



Figure 1: Photo of a goal elicitation workshop




Figure 2: Core icons used in motivational models.
Image credit: James George Marshall

In motivational modelling, three kinds of goals – do, be, and feel goals – are elicited alongside stakeholders and possible concerns. Figure 1 shows one of these goal elicitation workshops. Do goals describe what the system to be designed should do, be goals describe how the system should be, and feel goals, or emotional goals, describe how using the system should feel. The results of the requirements elicitation session(s) are converted into a hierarchically structured motivational goal model, which provides a practical way of communicating, visually and verbally, the functional, quality and emotional goals that need to be addressed in the design of new technology for adoption. A tool for the conversion can be found at motivationalmodelling.com.
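
As a purely hypothetical illustration of how such a hierarchical model might be represented in software (the field names below are ours, not those of the tool at motivationalmodelling.com), each node carries its do, be and feel goals together with the stakeholders concerned:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Goal:
    name: str
    do: List[str] = field(default_factory=list)    # what the system should do
    be: List[str] = field(default_factory=list)    # how the system should be
    feel: List[str] = field(default_factory=list)  # how using it should feel
    who: List[str] = field(default_factory=list)   # stakeholders involved
    subgoals: List["Goal"] = field(default_factory=list)

manage_record = Goal(
    name="Self-manage my health record",
    do=["view record", "share with carer"],
    be=["secure", "accessible"],
    feel=["empowered", "in control"],
    who=["patient", "carer"],
)
root = Goal(name="Support self-managed health", subgoals=[manage_record])
```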

Motivational models can subsequently be used throughout the design process to steer exploration, experimentation and evaluation strategies. The models created can be used as shared artefacts amongst software teams and non-technical stakeholders to ensure that the functional, quality and emotional goals of users are identified, upheld and advocated for throughout the software engineering process.

Key benefits of motivational modelling are:

Modelling the goals, desires and needs of stakeholders 

Technical and non-technical individuals can empathise with the end user and visualise their differences and dependencies. The model represents emotional goals intuitively. In our experience, that means the whole team buys into making the software emotionally relevant rather than leaving it solely as the responsibility of the UX team.

Sparking a conversation that leads to creative solutions 

New ideas are triggered through improved communication, collaboration and joint problem-solving. Possessing design artefacts alone is not enough. The activities and deliberations that happen leading up to the finished artefact are equally important for building understanding and meaning.

Supporting teams to navigate and resolve the ambiguity in emotional goals

Emotional goals are inherently ambiguous. It is instinctive to resolve this ambiguity early to reduce uncertainty in the project. In the case of emotional goals, however, it is important to maintain the abstract nature of the goal for longer in order to progress towards a solution.


Motivational models are part of a longer-term agenda towards improving our ability to address socially-oriented requirements in software, and more generally towards examining how we represent these concepts throughout the entire software development process. More information is available online [link].




[1] Centre for Design Innovation, Swinburne University of Technology, Australia
[2] PsyLab, The Bradfield Centre, Cambridge Science Park, Cambridge, UK

Tuesday, May 7, 2019

Towards Holistic Smart Cities



Authors: Schahram Dustdar (@dustdar), Stefan Nastić, Ognjen Šćekić

Associate Editor: Muneera Bano (@DrMuneeraBano)



Today’s Smart City developments can be summarized as ‘representative-smart’, as opposed to ‘collective-smart’, one of the terms we propose for describing the future vision of cyber-human smart cities: a vision involving a rich and active interplay of different stakeholders (primarily citizens, local businesses and authorities) that effectively transforms the currently passive stakeholders into active ecosystem actors.


Realizing such a complex interplay requires a paradigm shift in how the physical infrastructure and people will be integrated and how they will interact. At the heart of this paradigm shift lies the merging of two technology and research domains – Cyber-physical Systems and Socio-technical Systems – into the value-driven context of a Smart City. The presented Smart City vision diverges from the traditional, hierarchical relationship between society and ICT, in which the stakeholders are seen as passive users who exclusively capitalize on technological advancements. Rather, the architecture we propose puts value generation at the top of the pyramid and relies on “city capital” to fuel the generation of novel values and the enhancement of traditional ones. This effectively transforms the role, and broadens the involvement and opportunities, of citizen-stakeholders, but also promotes ICT from passive infrastructure to an active participant shaping the ecosystem.

Architecture of Values: The fundamental idea behind a collective-smart city is the inclusion of all its stakeholders (authorities, businesses, citizens and organizations) in the active management of the city. This includes not only the management of the city’s infrastructure, but also the management of different societal and business aspects of everyday life. The scale and complexity of managing diverging individual stakeholder interests were, in the past, the principal reasons for adopting a centralized city management model in which elected representatives manage all aspects of the city’s life and development.

However, we believe that recent technological advances will enable us to share the so-far centralized decision-making and planning responsibilities directly with various stakeholders, allowing faster and better-tailored responses of the city to various stakeholder needs.

The key technological enabler for this process is the active and wide-scale use and interleaving of technologies and principles from the IoT and Social Computing domains in the urban city domain. These technologies form the basic level of the proposed architecture of values. They allow the city to interact bidirectionally with the citizens in their everyday living, working and transport environments using various IoT edge devices and sensors, but also to actively engage citizens and other stakeholders to perform concrete tasks in the physical world, express opinions and preferences, and make decisions. The “city” does not need to be an active part in this interaction. It can serve as a trustworthy mediator providing the physical and digital infrastructure and accepted coordination mechanisms facilitating self-organization of citizens into transient, ad hoc teams with common goals. This synergy, in turn, enables the creation of novel societal and business values.

Infrastructural values – This category includes and extends the benefits conventionally associated with the existing notion of Smart City – those related to the optimized management of shared (city-wide) infrastructure and resources. Traditionally, the management of such resources (e.g., transportation network and signalization, internet infrastructure, electricity grid) has been static and highly centralized. The new vision of a Smart City relies on the interplay of humans and the IoT-enabled infrastructure, enabling additional, dynamic, locally scoped infrastructural optimizations and interventions, e.g., optimization of physical and IT/digital infrastructure in domains such as computational resources, traffic or building management. Apart from existing static/planned optimizations (e.g., static synchronization of traffic lights), the dynamic optimizations of the infrastructure might include temporary traffic light regime changes when a car accident is detected.

Societal values – This novel value category arises through the direct inclusion and empowerment of citizens as key stakeholders of the city. The fact that citizens can be incentivized, or paid, to perform specific tasks in both the digital and physical environments is a powerful concept that brings along a plethora of socially significant changes.

For example, while most cities function as representative democracies, significant local changes are often decided upon through direct democracy (referendums, initiatives). While undeniably fair in principle, one of the biggest obstacles to more frequent use of direct democracy is the under-informedness of voters. It has been shown that informing the citizens enables them to make more judicious and responsible decisions. The pervasiveness of IoT devices enables direct interaction with citizens and opens up the possibility of informing them better, or even simulating in practice the outcomes of different election choices.




Tuesday, April 30, 2019

Citizen Engagement in Smart Cities: Theoretical Dreams vs Practical Reality


By: Muneera Bano (@DrMuneeraBano) and Didar Zowghi (@DidarZowghi)




It has been predicted that by 2050, around two-thirds of the world’s population might be living in urban settlements. To make cities ready for this population expansion and growth, ICT is playing a critical role in the future of urbanisation, referred to as the ‘Smart City’. It has recently become a hot topic of research, with tech giants such as Google and Microsoft entering the real-estate race.


There is no consensus on the exact definition of smart cities; however, any definition refers to the core concepts of an advanced technological infrastructure for an urban society with collaborative and interactive human-centred design. An emerging view is that smart cities aim to increase efficiency and sustainability and to improve the quality of life for citizens by utilizing technologies to connect every layer of a city, from the air to the streets to underground, and to capture and analyse data from various independently managed and operated infrastructures, utilities and service providers.


The buzzwords used by researchers to propose architectural solutions for smart cities include Artificial Intelligence (AI), the Internet of Things (IoT), smart phones, cloud-based services and Big Data. In essence, a smart city is a large-scale, cyber-physical, complex socio-technical system for an urban population that is comprised of many interconnected subsystems. Examples of these subsystems include transportation, power and water supply, waste management, pollution monitoring, crime detection, video surveillance, emergency response systems and other smart community initiatives for e-governance. Typical examples of smart cities are Singapore, Dubai, Amsterdam, Barcelona, Stockholm, and New York.

The three core components of a smart city are People, Processes and Technology. Regardless of the type of technology used for smart city implementation, the most emphasised factor is ‘Citizen Engagement’. As pointed out by Bettina Tratz-Ryan, research vice president at Gartner, "The way forward today is a community-driven, bottom-up approach where citizens are an integral part of designing and developing smart cities and not a top-down policy with city leaders focusing on technology platforms alone”.

Smart city design should not only allow the community of citizens to interact directly with the technology but also increase their participation in the governance of the cities. However, relatively little research has focused on the complexities and pragmatics of citizen engagement leading to their participation in governance.

There are various stakeholders in a smart city, and citizens are only one group of stakeholders. The intention in involving citizens in co-production and the evolution of the smart city is to turn them into a technologically intelligent community where collective human intelligence works in parallel with AI for maximum effectiveness. However, in practice, such a form of citizen engagement (to the level of co-governance by society) has yet to be observed in real-life examples.

The democratic concept of stakeholder involvement in system design is quite old and well established. Without careful consideration and management, involving stakeholders can cause issues rather than provide benefits. A smart city, being a complex, large-scale, cyber-physical, multi-faceted, multi-layered and socio-technical system, presents new challenges on how to involve and engage the right stakeholders (citizens).

The critical aspects of any smart city project are derived from the political, social and cultural values of the society. The design and infrastructure of a smart city, the type of citizen engagement, and its evolution will reflect the political system. Examples of such differences can be seen in the citizen engagement initiatives of Japan, the Social Credit System of China, and the bio-microchip implementations in Sweden.

Whether citizen engagement is a democratic initiative (neo-humanist), where the technology is utilised to improve the life and environment of a city, or a step towards greater control of the behavioural patterns of citizens based on the politically acceptable values inherent in the governance layer of society (a functionalist approach), or, simply phrased, mass surveillance, the questions regarding citizen engagement, such as who will be involved, why, when, how and how much, will all be answered within the political context and the paradigm of governance of a country.

There is a need for further research on the various dimensions of citizen engagement, not just from purely technological perspectives but also from social perspectives such as the political, cultural, and ethical. Citizen engagement is one of the most important aspects of a smart city, and a lack of proper engagement and fair representation of citizens from all walks of life can have serious repercussions. A lack of diverse representation can lead to biases in design that disadvantage under-represented or underprivileged groups of citizens. There is also the possibility of widening the digital divide, which will impact the less technologically savvy population of cities.

Another crucial issue that requires attention is data protection and privacy. Smart cities capture and manage large amounts of data that are extremely important for their operations. Any data loss will disrupt city operations and will impact citizens’ trust and confidence. Data collected and manipulated by smart city solutions are critically sensitive for citizens, businesses, government and emergency services. To ensure compliance with data protection regulations such as the GDPR, smart city architectures must include data protection as a critical requirement and must embed privacy protection in all stages of the data lifecycle.

Tuesday, April 16, 2019

Microservice API Patterns - How to Structure Data Transfer Representations and Endpoints

Authors: Olaf Zimmermann, Uwe Zdun (@uwe_zdun), Mirko Stocker (@m_st), Cesare Pautasso (@pautasso), Daniel Lübke (@dluebke)

Associate Editor: Niko Mäkitalo (@nikkis)


The Microservice API Patterns at www.microservice-api-patterns.org distill proven solutions to recurring service interface design and specification problems such as finding well-fitting service granularities, promoting independence among services, or managing the evolution of a microservice API.


Motivation
It is hard to escape the term microservices these days. Much has been said about this rather advanced approach to system decomposition since James Lewis’ and Martin Fowler’s Microservices Blog Post from April 2014. For instance, IEEE Software devoted a magazine article, a two-part Insights interview (part 1, part 2) and even an entire special theme issue to the topic.

Early adopters’ experiences suggest that service design requires particular attention if microservices are supposed to deliver on their promises:
  • How many service interfaces should be exposed?
  • Which service cuts let services and their clients deliver user value jointly, but couple them loosely?
  • How often do services and their clients interact to exchange data? How much and which data should be exchanged?
  • What are suitable message representation structures, and how do they change throughout service lifecycles?
  • How to agree on the meaning of message representations – and stick to these contracts in the long run?

The Microservice API Patterns (MAP) at www.microservice-api-patterns.org cover and organize this design space, providing valuable guidance distilled from the experience of API design experts.

What makes service design hard (and interesting)?
An initial microservice API design and implementation for systems with a few API clients often seem easy at first glance. But a lot of interesting problems surface as systems grow larger, evolve, and get new or more clients:
  • Requirements diversity: The wants and needs of API clients differ from one another, and keep on changing. Providers have to decide whether they offer good-enough compromises or try to satisfy all clients’ requirements individually.
  • Design mismatches: What backend systems can do and how they are structured, might be different from what clients expect. These differences have to be dealt with during the API design.
  • Freedom to innovate: The desire to innovate and market dynamics such as competing API providers trying to catch up on each other lead to the need to change and evolve the API. However, publishing an API means giving up some control and thus limiting the freedom to change it.
  • Risk of change: Introducing changes may result in possibly incompatible evolution strategies going beyond what clients expect and are willing to accept.
  • Information hiding: Any data exposed in an API can be used by the clients, sometimes in unexpected ways. Poorly designed APIs leak service implementation secrets and let the provider lose its information advantage.

Such conflicting requirements and stakeholder concerns must be balanced at the API design level; here, many design trade-offs can be observed. For instance, data can be transferred in a few calls that carry lots of data back and forth, or alternatively, many chatty, fine-grained interactions can be used. Which choice is better in terms of performance, scalability, bandwidth consumption and evolvability? Should the API design focus on stable and standardized interfaces or rather focus on fast-changing and more specialized interfaces? Should state changes be reported via API calls or event streaming? Should commands and queries be separated?

All of these – and many related – design issues are hard to get right. It is also hard to oversee all relevant consequences of a design decision, for instance regarding trade-offs and interdependencies of different decisions.


Enter Microservice API Patterns (MAP)
Our Microservice API Patterns (MAP) focus – in contrast to existing design heuristics and patterns related to microservices – solely on microservice API design and evolution. The patterns have been mined from numerous public Web APIs as well as many application development and software integration projects the authors and their industry partners have been involved in.

MAP addresses the following questions, which also define several pattern categories:
  • The structure of messages and the message elements that play critical roles in the design of APIs. What is an adequate number of representation elements for request and response messages? How are these elements structured? How can they be grouped and annotated with supplemental usage information (metadata)?
  • The impact of message content on the quality of the API. How can an API provider achieve a certain level of quality of the offered API, while at the same time using its available resources in a cost-effective way? How can the quality tradeoffs be communicated and accounted for?
  • The responsibilities of API operations. Which is the architectural role played by each API endpoint and its operations? How do these roles and the resulting responsibilities impact microservice size and granularity?
  • API descriptions as a means for API governance and evolution over time. How to deal with lifecycle management concerns such as support periods and versioning? How to promote backward compatibility and communicate breaking changes? 

So far, we have presented ten patterns at EuroPLoP 2017 and EuroPLoP 2018; about 35 more candidate patterns are currently being worked on. The published patterns and supporting material are available on the MAP website that went live recently. The papers are available via this page.

Sample Patterns for Communicating and Improving Interface Quality
To illustrate MAP a bit further, we summarize five patterns on communicating and improving API qualities below. We also outline their main relationships.

Figure: Relationships between Selected Patterns for Communicating and Improving Interface Quality.

  • API Key: An API provider needs to identify the communication participant it receives a message from to decide if that message actually originates from a registered, valid customer or some unknown client. A unique, provider-allocated API Key per client, to be included in each request, allows the provider to identify and authenticate its clients. This pattern is mainly concerned with the quality attribute security.
  • Wish List: Performance requirements and bandwidth limitations might dictate a parsimonious conversation between the provider and the client. Providers may offer rather rich data sets in their response messages, but not all clients might need all of this information all the time. A Wish List allows the client to request only the attributes in a response data set that it is interested in. This pattern addresses qualities such as accuracy of the information needed by the consumer, response time, and performance, i.e., the processing power required to answer a request.
  • Rate Limit: Even after the provider has identified its clients, an authenticated client could use excessively many resources, thus negatively impacting the service for other clients. To limit such abuse, a Rate Limit can be employed to restrain certain clients. The client can stick to its Rate Limit by avoiding unnecessary calls to the API. This pattern is concerned with the quality attributes of reliability, performance, and economic viability.
  • Rate Plan: If the service is paid for or follows a freemium model, the provider needs to come up with one or more pricing schemes. The most common variations are a simple flat-rate subscription or a more elaborate consumption-based pricing scheme, explored in the Rate Plan pattern. This pattern mainly addresses the commercialization aspect of an API.
  • Service Level Agreement: API providers want to deliver high-quality services while at the same time using their available resources economically. The resulting compromise is expressed in a provider’s Service Level Agreement (SLA) by the targeted service level objectives and associated penalties (including reporting procedures). This pattern is concerned with the communication of any quality attribute between API providers and clients. Availability is an example of a quality that is often expressed in such an SLA.
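
To make these descriptions a little more concrete, here is a minimal, framework-free Python sketch (our own illustration, not taken from the MAP website) that combines three of the patterns above: the API Key identifies the client, the Rate Limit restrains it, and the Wish List trims the response to the attributes the client asked for.

```python
import time

API_KEYS = {"key-abc": "client-1"}   # provider-allocated API Keys
RATE_LIMIT = 5                       # requests per 60-second window
windows = {}                         # client -> (window_start, request_count)

FULL_RESOURCE = {"id": 42, "name": "pump-7", "status": "ok",
                 "firmware": "2.1.0", "location": "hall-B"}

def handle_request(api_key, wish_list=None):
    client = API_KEYS.get(api_key)
    if client is None:                                    # API Key pattern
        return 401, {"error": "unknown API Key"}
    start, count = windows.get(client, (time.time(), 0))
    if time.time() - start > 60:                          # start a new window
        start, count = time.time(), 0
    if count >= RATE_LIMIT:                               # Rate Limit pattern
        return 429, {"error": "Rate Limit exceeded"}
    windows[client] = (start, count + 1)
    if wish_list:                                         # Wish List pattern
        body = {k: v for k, v in FULL_RESOURCE.items() if k in wish_list}
    else:
        body = FULL_RESOURCE
    return 200, body

print(handle_request("key-abc", wish_list=["id", "status"]))
# -> (200, {'id': 42, 'status': 'ok'})
```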

More patterns and pattern relationships can be explored at www.microservice-api-patterns.org. In addition to the patterns, you will find additional entry points there, such as a cheat sheet, and various pattern filters, such as patterns by force and patterns by scope (phase/role).


Wrapping Up

Microservice API Patterns (MAP) is a volunteer project focused on the design and evolution of Microservice APIs. We hope you find the intermediate results of our ongoing efforts useful. They are available at www.microservice-api-patterns.org – we will be glad to hear about your feedback and constructive criticism. We also welcome contributions such as pointers to known uses or war stories in which you have seen some of the patterns in action.

The patterns in MAP aim at sharing timeless knowledge on distributed system APIs. While trends like microservices come and go, the fundamental design problems of exposing remote APIs will not go out of fashion any time soon!