Monday, June 24, 2019

Can AI be Decolonial?

By: Asma Mansoor

Associate Editor: Muneera Bano (@DrMuneeraBano)


In a world marked by economic, racial and gender-based hierarchies, can AI be decolonial?

If not, can it become decolonial? 


These questions might elicit criticism, since computing and its associated fields are generally assumed to be democratic in flavour, working in a realm where constructs such as race and gender are thought to be reduced to irrelevant abstractions. But it is precisely this reduction that I find problematic, specifically in a world in which many regions are still experiencing a colonial hangover in the form of neocolonial exploitation. This exploitation, galvanized by various Capitalist corporate structures, manifests itself via technological interventions, such as surveillance and drone technology, biotechnology, and the abuse and degradation of indigenous environments in the garb of progress. From the fifteenth century onwards, European colonization has been supplemented by technological advancements which have helped consolidate the various Others of the West.

As cyberspace expands and AI becomes more autonomous, the possible colonial implications of these advancements are gradually becoming a matter of concern for numerous people living in the Global South, myself included. Our fears are not unfounded. The CIA’s Weeping Angel program, which permitted the installation of spying software on smart TVs, was sanctioned for devices headed to countries suspected of harbouring and supporting terrorism. This reflects how surveillance technologies operate as tools of Othering in the hands of Euro-American power structures, inferiorizing peoples and countries. Technology in all its forms is helping supra-national Capitalist conglomerates become increasingly colonial as they impose their sovereign right to regulate and manipulate the technology that they ration out to states and groups, as we saw in the case of Facebook. To ask whether AI, as a component of this technological colonization, can be decolonial therefore becomes a rather loaded question which cannot be answered in a simple manner.


What I imply by decoloniality is not an end of colonization per se. I take it in the sense given to it by Walter Mignolo, who defines decoloniality as a non-hierarchical inter-epistemological exchange that encourages epistemic disobedience and a delinking from colonial epistemologies, in order to build a world where many worlds can exist in a state of non-hierarchical epistemic osmosis. However, ours is also the age of the Empire, which, according to Michael Hardt and Antonio Negri, is the sovereign power that regulates global exchanges. Like the decolonial ethos, which advocates a cross-cultural exchange of knowledge without centralizing any mode of thinking, this Empire also claims to encourage a decentered osmosis, at least in theory if not in practice. What makes the operations of this global Empire different from decolonial politics is that the Empire upholds its epistemic sovereignty and cannot afford to decentralize its economic, technological and intellectual supremacy. Computing and AI are vital components in this global regulatory apparatus.

Therefore, I believe that at the present moment in time AI is not decolonial, and cannot be unless the formerly colonized appropriate it for their own interests, a task which I am convinced is fraught with obstacles. AI responds to the master because it is programmed by the master, who needs to uphold global hierarchies and inequalities. It operates as the Golem in the hands of the Global Capitalist masters, determining on their behalf who is to be excluded, who is to be included, and the extent to which they are to be included.

Biases are encoded within its very algorithmic genes, as the works of Safiya Umoja Noble and David Beer indicate. It inherits the aesthetic biases of its makers, including those governing the perceptions of race and gender. An international beauty contest judged by AI in 2016 revealed that the judging machines did not consider dark skin beautiful. Driverless cars are more likely to fail to detect, and therefore hit, pedestrians with darker skin. AI-based voice assistants have been reported to respond less reliably to unfamiliar accents and to the voices of women.

Like Macaulay’s Minute Men, AI is a product of a colonial mentality. It absorbs not only the colonisers’ ways of knowing but also their prescriptions of bodily aesthetics. However, at the current moment in time, AI is better off than Macaulay’s Minute Men, who experienced displaced and schismatic identities in their effort to become like their Masters. AI, at present, is not aware of these complexes. Perhaps, in a few years, as it gains sentience, AI will develop similar complexes in its efforts to become more human. At the moment, it is fully complicit with the neocolonial agenda wherein all Others are equal but some Others are more Other than others. It keeps an eye on rogue elements, further marginalizing those who are already marginalized. It is not decolonial precisely because it is reinforcing the very hierarchies that decoloniality sets out to dismantle.

So what needs to be done? Perhaps what goes into AI's programming needs to be rethought with a more acute awareness, and that can be done by taking on board social, philosophical and literary theorists. Perhaps only then can the decolonization of AI truly begin.

Sunday, June 9, 2019

Self-awareness and autonomy in pervasive computing systems


Associate editor: Danilo Pianini (@DanySK86)

The number of computing systems in our environment is constantly increasing, and it’s not just the obvious individual devices we have all added to our lives. This trend is accelerating even further due to ongoing advances in pervasive computing, cyber-physical systems, the Internet of Things, and Industry 4.0, as they manifest in smart cities, smart homes, smart offices, and smart transport. The numbers alone make centralized control problematic from an engineering point of view, even without considering the speed of dissemination and adoption. The vast diversity of interfaces and interaction requirements is imposing an as-yet-unmeasured increase in cognitive and physiological demands on all of us. One way to lessen these human and technological demands is to offload some control to the individual devices themselves. This not only relieves demands on miniaturization, control systems, and server infrastructures, but also relieves cognitive and physiological demands on the users, and allows the devices to react more quickly to new situations, and even to known or anticipated situations that unfold more rapidly than current hierarchical control systems can accommodate.
One approach to imbuing individual devices with more autonomy is to design them to be self-aware. This would enable devices to build models of themselves and their environment, to develop and refine these models during runtime, and to reason about them in order to make well-founded decisions. Different levels of self-awareness have been proposed, addressing the various degrees to which a computational system can be aware. It has been demonstrated that this can improve system performance, even when collaborating with others.
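
As a concrete illustration of the idea, here is a minimal sketch (in Python, purely illustrative and not drawn from the cited work) of a device that maintains a model of itself and a model of its environment, refines both at runtime, and reasons over them before committing to work. All class, field, and parameter names are assumptions made for the example.

```python
# Minimal sketch of a device that keeps runtime models of itself and its
# environment. All names (SelfAwareDevice, observe, decide) are illustrative
# assumptions, not an API from the cited work.

class SelfAwareDevice:
    def __init__(self, battery_capacity_wh: float):
        self.self_model = {"battery_wh": battery_capacity_wh, "avg_task_cost_wh": 0.1}
        self.env_model = {"avg_request_rate_hz": 0.0}
        self._observations = 0

    def observe(self, request_rate_hz: float, task_cost_wh: float) -> None:
        """Refine both models at runtime with a simple running average."""
        self._observations += 1
        n = self._observations
        self.env_model["avg_request_rate_hz"] += (
            request_rate_hz - self.env_model["avg_request_rate_hz"]) / n
        self.self_model["avg_task_cost_wh"] += (
            task_cost_wh - self.self_model["avg_task_cost_wh"]) / n

    def decide(self, horizon_s: float) -> bool:
        """Reason over the models: accept work only if the battery is
        expected to last for the given time horizon."""
        expected_cost = (self.env_model["avg_request_rate_hz"] * horizon_s
                         * self.self_model["avg_task_cost_wh"])
        return expected_cost < self.self_model["battery_wh"]
```
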
We offer an outline of three important factors that have the potential to challenge the success of collaborating self-aware systems.



Situatedness

Systems distributed in a real-world environment will perceive that environment differently, even when their abilities to perceive it are equal and they are in close proximity to one another. The following figure depicts a network of three smart cameras, able to perceive their environment and process this information locally.


This network illustrates two problems with respect to the situatedness of individual devices. Cameras A and B are physically very close, mounted on a common pole. However, due to their constrained perception of the world, they cannot perceive the same objects at the same time. Camera C, on the other hand, is mounted on a house and observes the same area as camera B, but from a different perspective, which means that their individual perceptions of a simultaneously viewed object can differ. Figure 1 shows that, while camera B sees a smooth, round object that is mostly green, camera C observes an object of non-uniform shape that is mostly red. Even if they share their information, they would also need to share an understanding of their differing perspectives in order to combine their perceptions and recognize that they are seeing the same object.
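
To make that last point concrete, the following minimal sketch assumes that each camera knows its own pose and shares it along with its detections, so that reports can be transformed into a common world frame before being compared. The poses, coordinates, and matching threshold are invented for illustration and do not come from the figure.

```python
import math

# Sketch: two cameras report the same object in their own local frames.
# Fusing the reports requires sharing each camera's pose (position and
# orientation), i.e. an understanding of the differing perspectives.
# The poses and detections below are made-up illustrative values.

def to_world(camera_pose, local_xy):
    """Transform a detection from a camera's local frame to the world frame."""
    cx, cy, heading = camera_pose          # camera position and orientation (rad)
    lx, ly = local_xy                      # detection in the camera's own frame
    wx = cx + lx * math.cos(heading) - ly * math.sin(heading)
    wy = cy + lx * math.sin(heading) + ly * math.cos(heading)
    return wx, wy

pose_b = (0.0, 0.0, 0.0)                   # camera B on the pole
pose_c = (10.0, 5.0, math.pi)              # camera C on the house, facing back

detection_b = {"frame_xy": (5.0, 2.5), "colour": "green", "shape": "round"}
detection_c = {"frame_xy": (5.0, 2.5), "colour": "red", "shape": "irregular"}

wb = to_world(pose_b, detection_b["frame_xy"])
wc = to_world(pose_c, detection_c["frame_xy"])

# If the world positions (roughly) coincide, the cameras are looking at the
# same object despite disagreeing about its colour and shape.
same_object = math.dist(wb, wc) < 1.0
print(same_object)
```
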

Heterogeneity

When operating alongside or in collaboration with others, a system might not be able to simply make assumptions about the abilities and behavior of another system. As an example, consider two digital cameras that both perceive their environment. Even though these two cameras may observe the same object in the same way, their perceptual tools may differ, and this could result in completely different perceptions of the same object. One might imagine a black-and-white sensor and a standard color sensor in the two cameras. Here the cameras cannot simply exchange color information about objects, as this would not result in a common understanding. Similarly, different zoom levels can lead to different resolutions, permitting one camera to perceive details that another cannot.
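
One way to picture a workable exchange between such heterogeneous cameras is sketched below: each camera converts its raw, sensor-specific reading into a shared, sensor-agnostic description (here, brightness and apparent size) before any comparison. The field names, conversion weights, and tolerances are illustrative assumptions rather than a prescribed protocol.

```python
# Sketch: a colour camera and a black-and-white camera cannot meaningfully
# exchange raw colour values, but they can exchange a shared, sensor-agnostic
# description of a detection. All values below are illustrative.

def describe_colour_detection(rgb, bbox_area_px):
    r, g, b = rgb
    # ITU-R BT.601 luma approximation, so both cameras speak in brightness.
    brightness = 0.299 * r + 0.587 * g + 0.114 * b
    return {"brightness": brightness, "area_px": bbox_area_px}

def describe_mono_detection(gray_value, bbox_area_px):
    return {"brightness": float(gray_value), "area_px": bbox_area_px}

def plausibly_same(d1, d2, brightness_tol=40.0, area_ratio_tol=2.0):
    # Allow for different zoom levels by comparing areas as a ratio.
    ratio = max(d1["area_px"], d2["area_px"]) / max(1, min(d1["area_px"], d2["area_px"]))
    return abs(d1["brightness"] - d2["brightness"]) < brightness_tol and ratio < area_ratio_tol

colour_report = describe_colour_detection((20, 180, 30), bbox_area_px=900)
mono_report = describe_mono_detection(118, bbox_area_px=500)   # lower zoom, smaller area
print(plausibly_same(colour_report, mono_report))
```
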

Individuality

Systems are often designed to perform very specific tasks. If they are intended to collaborate with others, this collaboration is usually clearly defined at the time of their design. If we want future systems to be able to establish collaboration autonomously, without a priori knowledge of their potential collaborators, we will have to build them with the ability to model the potential collaborators that they encounter. In addition, they have to be able to model the behavior of those new collaborators and adapt their own behavior according to larger collaborative models that were developed on the fly.
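
The sketch below shows one possible shape for such on-the-fly collaborator models: a system creates a model the first time it encounters a peer, records the capabilities and outcomes it observes, and later uses those models to choose a partner. The structure of the model and all names are assumptions made for illustration; they are not taken from the referenced work.

```python
# Sketch: building simple behavioural models of collaborators discovered at
# runtime, with no a priori knowledge of them.

class CollaboratorModel:
    def __init__(self, peer_id: str):
        self.peer_id = peer_id
        self.capabilities: set[str] = set()
        self.requests = 0
        self.successes = 0

    def record(self, capability: str, succeeded: bool) -> None:
        """Update the model with the outcome of one observed interaction."""
        self.capabilities.add(capability)
        self.requests += 1
        self.successes += int(succeeded)

    def reliability(self) -> float:
        return self.successes / self.requests if self.requests else 0.0


class CollaborativeSystem:
    def __init__(self):
        self.peers: dict[str, CollaboratorModel] = {}

    def on_encounter(self, peer_id: str) -> CollaboratorModel:
        # A model is created on the fly the first time a peer is met.
        return self.peers.setdefault(peer_id, CollaboratorModel(peer_id))

    def choose_partner(self, capability: str):
        # Adapt behaviour to the models built so far: pick the most reliable
        # peer known to offer the required capability, if any.
        candidates = [m for m in self.peers.values() if capability in m.capabilities]
        return max(candidates, key=CollaboratorModel.reliability, default=None)


system = CollaborativeSystem()
system.on_encounter("camera-C").record("object-detection", succeeded=True)
best = system.choose_partner("object-detection")   # -> the model for "camera-C"
```
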

Conclusion

Current work on self-aware systems focusses on individual computing systems, rather than on defining, designing, and developing features that would enable and improve heterogeneous collaboration during runtime. In order to facilitate collaboration among systems, we have proposed additional levels of networked self-awareness [1]. Implementing these additional levels will enable systems to develop adaptable models of their environment, of other systems, and of themselves, as well as of the ways in which those models interact and impact one another. Systems equipped with such models should be able to meet the challenges outlined above and to collaborate with other systems in achieving their shared and individual goals.

References

  1. L. Esterle and J. N. Brown, "I Think Therefore You Are: Models for Interaction in Collectives of Self-Aware Cyber-Physical Systems," Transactions on Cyber-Physical Systems, under review, 2019.