The Morality of Technology
Technology is neither good nor bad. It’s what you do with it.
Science can only ascertain what is, but not what should be. Outside of its domain, value judgments of all kinds remain necessary. — Albert Einstein
Some technologies have an implicit morality. Intercontinental ballistic missiles (ICBMs), for instance, are quite obviously engineered to kill. While it is conceivable that they could be redeployed for a more benign purpose (science fiction tropes suggest they could be used to destroy meteors headed towards Earth), there is little doubt that their primary purpose is malevolent.
But ICBMs are, in this respect, unusual. Most technologies are value neutral: they show us how to do things but leave it to us to decide how we use that knowledge.
Take the example of CRISPR-Cas9, the revolutionary gene editing technology I wrote about in a previous column. CRISPR allows us to edit genetic code by replacing specific DNA sequences with others. Companies like Memphis Meats, which produce meat from animal cells without slaughtering animals, will most likely want to use this technology to augment their existing offerings by genetically engineering suitable traits into their products. Similarly, biotech companies will want to use CRISPR to treat cancer by cutting out corrupted, disease-inducing genetic code and replacing it with normal, untainted sequences. That said, CRISPR could just as easily be used for less noble purposes. For instance, wealthy parents could use it to produce designer babies: children whose embryonic DNA has been edited to match their specifications of eye and hair colour, or of IQ or athletic prowess. On a more ominous note, CRISPR could be used to weaponize common disease-causing bacteria by altering their DNA so that they reproduce more rapidly and resist treatment.
Another example of the two sides of technology is drones, a modern technology that is already being deployed widely, from delivering groceries to ensuring that life-saving equipment reaches first responders in high-density urban areas. But for every beneficent use of drone technology, there is an equally dubious one that challenges our ethical boundaries. Foremost among these is the development of AI-powered killer drones: autonomous flying weapons intelligent enough to accurately distinguish between friend and foe and then, on their own, take the decision to execute a kill.
This duality is inherent in all technology. But the fact that a technology can be used for evil should not, of itself, be a reason not to use it. We need new technology to better ourselves and the world we live in, and we need to be wise about how we apply it so that our use remains consistent with the basic morality of modern society. This implies that each time we make a technological breakthrough, we must assess afresh the contexts in which it could be applied and the uses to which it should (and should not) be put. If required, we must take the trouble to redraw our moral boundaries, establishing the limits within which new technologies must be constrained.
India is currently at the tail end of a massive identity project, one designed at a scale unprecedented anywhere in the world. It has deployed a technology solution capable of providing each and every resident of the country with a unique identity, using their biometric information to distinguish them. To vast sections of our population, this represents an opportunity to access facilities they have so far been denied for want of verifiable identity. At the same time, there is a legitimate fear that this identity technology will open us all up to discrimination, prejudice and the risk of identity theft.
As with all technologies, Aadhaar itself is inherently neutral. The identity solution it offers could, on the one hand, bring into the formal system millions of people who have so far been excluded; on the other hand, if compromised, the vast database it has generated, along with the ecosystem of associated datasets that have sprung up around it, could cause widespread harm. As always, the problem lies not so much with the technology itself as with the legal, ethical and moral boundaries we establish for how it should be used. Data is the new currency, and our identity technology represents a tremendous opportunity for gain. In the absence of boundary conditions, there will be no limits on the innovativeness with which the technology is used to extract benefit. Aadhaar has given us the tools to harness data in large volumes. Used wisely, this technology can transform the nation. If not, it can cause us untold harm. We need to be prepared for the impending flood of data: we need to build dams, sluice gates and canals in its path so that we can guide its flow to our benefit.
If we do not, it will submerge us completely.
This article was first published in The Mint under a column called Ex Machina on technology, law and everything in between.