
Questions and Answers on AI Regulation

Stephen Downes

Knowledge, Learning, Community

Half an Hour, Jun 05, 2023



Suchith Anand poses some interesting questions on the OPENED SIG mailing list, and though I am not the British government agency to which the questions are addressed, I have thoughts...

 

  • What are the main human rights issues in the digital age? 

They are to my mind the same human rights issues as existed before the digital age, and which are addressed quite well (still) in the Universal Declaration of Human Rights. These include especially the idea that "all human beings are born free and equal in dignity and rights," and everything else that follows from this.

As a human society we have not done so well in establishing these rights. Most countries fail to recognize one or more of them, some countries recognize almost none of them, and even established democracies with a history of recognizing most rights have been backsliding in recent years. 

We could try to establish an additional charter of rights, as I did back in 1999, but all of these rights - access, communication, freedom of speech, personal privacy, ownership of one's own information, etc. - are essentially derived from the set of basic rights enshrined in the Declaration.

  • What are the ethical implications of surveillance capitalism? For example, automated tools being used to monitor and manage workers?

I think that in the long run it won't be the surveillance that matters so much as what use the information is put to. I have several reasons for thinking this.

First, and probably most significantly, there are valid reasons for a lot of surveillance. With eight billion people (and counting) on the planet, traditional policing becomes prohibitively expensive and unreliable. For many people, the most significant deterrent to crime - everything from fraud to extortion to murder - is the knowledge that they will be caught. People may not like speed cameras, but they do prevent speeding, and are far less expensive than two cops in a car at a speed trap.

Second, surveillance also helps ensure a more even-handed application of justice. The bias of police with regard to minorities and less well-respected people is widely known. Thanks to widespread surveillance - everything from people recording video on their phones to dashboard cameras - this bias has been detected, shown to people, and to some (small) degree, addressed. The same is true for digital crime; with sufficient surveillance we can look forward to the day corporate tax evasion receives as much enforcement as welfare fraud. 

Third, to a large degree, it's a lot easier to allow surveillance than to prevent it. Here I'm not just talking about employers using keyloggers and such, but people on the street with cameras, banks tracking financial transactions, location services being employed on your smartphone, etc. It's hard to overestimate how much enforcement, tracking, and, um, surveillance is required to support any sort of ban on surveillance.

That's why I focus much more on what we do with the information. And, in a nutshell, the argument is: if we have sufficient recognition and enforcement of our rights, then to a large degree surveillance doesn't matter. For example, suppose it doesn't matter whether you're gay, or whether or not you're organizing a union, or whether or not you're a Sufi. Then it doesn't matter whether information systems detect these facts.

Now I'm not naive; I know that a lot of discrimination exists, and that rights are abused all the time, and that this is enabled by surveillance. But the problem isn't the surveillance (which would just as often reveal cases where rights are being violated). The problem is the violation of human rights.

I also recognize that people have a right to expect a certain degree of privacy. People have secrets - their bank account numbers, their lists of lovers, their future employment plans, etc. - and have every right to keep them. These will no doubt be detected by some surveillance systems (just as today they can be detected when you look at their mail, overhear a conversation, listen to a gossip, etc.). But having the information does not give someone the right to share the information. I think we can have privacy and surveillance.

  • How are basic human rights (as defined by the UN) embedded into current data legislation? Is there more to do?

This will vary from country to country, but I would suspect that it is not well embedded at all, simply because the state of data legislation, where it exists at all, is nascent.

But - frankly - it would be very easy to embed it. We could, for example, preface every instance of data legislation with the caveat that "nothing in this legislation supersedes the Universal Declaration of Human Rights, and all implementations of this legislation must be consistent with the Universal Declaration of Human Rights." In fact, we could do that with all legislation, and with national constitutions in general.

But we don't. And this points to the deeper problem: a lot of people (including especially people in power) don't want to support and enforce human rights. It's the same problem as with surveillance. There are bad actors out there, and they will use whatever tools they can to violate our rights. And one such tool is national legislation, and they make very sure that universal human rights are not recognized or enforced in law.

AI regulations won't fix this. They just create another body of legislation that can entrench existing power and discriminatory practices. To a large degree, that's the primary reason why people like Sam Altman are calling for AI regulations (so long as it's him, not the EU, doing it). It's to create what's called 'regulatory capture' which will in fact ensure that human rights are not protected in these domains.

  • Who owns our data? Who benefits from our data?

So of course, the way the question is phrased, it's "our data", therefore, presumably, we own it. But there are some problems with that way of thinking about it.

 

First, in some jurisdictions, data cannot be copyrighted. That means that a host of basic facts about ourselves are not owned by anyone. It was not so long ago that everyone's name, address and telephone number was listed in a book that was actually distributed to everyone (you could pay to be unlisted, though others could pay more to access unlisted numbers).

 
In Europe, of course, the questions of ownership and of rights over the use and distribution of data are exhaustively detailed in the General Data Protection Regulation (GDPR), a body of law that has received a lot of criticism, but which overall represents the best attempt to date to untangle the complex relationship between data, privacy, and society. Presumably the best advice would be for the U.K. to simply adopt the GDPR, but I understand there are dissenting voices.

  • How we might avoid the pitfalls of feudalism and colonialism in this new and constantly evolving landscape.

It's interesting to see the question phrased this way. There has been a lot of discussion in recent years about digital colonialism, but rather less about digital feudalism, though of course the two have been historically entwined. It's easy and wrong to say that every non-Indigenous person living on Indigenous land is a colonizer, but is it fair and accurate to call the descendants of slaves colonizers? Or the descendants of indentured servants? Or people who were impressed into the service of the Royal Navy but who jumped ship in Florida? Or the descendants of Irish farmers who were forced from their land and put on ships very much against their will?

 

None of this in any way is intended to exonerate the colonizers or to diminish in any way the long struggle by Indigenous people to reclaim their rights, their land and their heritage. But it is to make clear that there are rank injustices within colonizer communities, that these have a basis in feudalism, and continue to exist to this day through the preservation of power and privilege of a hereditary elite (or of an elite who wrested that power for themselves through military means).

 

We could draw some lessons from the current Ukraine conflict. As the saying goes, if Russia stops fighting, the war stops and Ukrainians get their land back. If Ukraine stops fighting, then Ukraine ceases to exist. Indigenous nations have never stopped fighting because they knew that if they did, they would cease to exist. But like the Ukrainians, their fight is less with the people as a whole than with the powerful elite pursuing (and still pursuing, to this day) an agenda of assimilation and colonization. That's what I would define as 'the pitfalls' the author is referring to.

 

Having said all this, I think societies have begun to map out something like a process. It begins with a process of truth, justice and reconciliation. And it involves ceding genuine and reasonable power to those who have been colonized. This shows up in several ways. One way - as I've already described above - is to ensure full recognition, entrenchment and enforcement of universal rights, for everyone. There can't be a tyranny of the majority ready to deny these rights at the drop of a hat or to bury the removal of these rights in domain-specific law (such as AI regulation).

 

Another way is to put actual power into the hands of the traditionally disenfranchised. This includes but is not limited to things like the means of production, social power, political power, technological power, and more. That is the intent of the 2nd Amendment (the 'right to bear arms') as it is presently interpreted, though let's not be under any illusion that the colonizers of the late 1700s had any intention of actually granting military power to the people. I would also add that the 2nd Amendment is a chimera - it doesn't grant any actual power, just the illusion of it, which would be removed as soon as it was used with any intent.

 

'Actual power' in today's world isn't military; it's wealth and technology (as, again, the Ukrainians know, which is why they keep asking for it). In a digital world, power is acquired by means of proprietary technology and content, regulatory capture, illegal (or barely legal) business methods, and a range of other illegitimate business practices. AI regulation won't fix this, but there are fixes, though it would require a broad social transformation to implement them. In the meantime, we can pretty much count on AI being used in the service of colonization and feudalism, just as pretty much everything else in society is.

 

This is actually pretty much a life-and-death question, because on a global scale, we've all been colonized by these feudal elites, who are degrading the entire environment for their own personal wealth and power, and who will leave us to die while they make their escape. 

  • How do we make sure there are proper governance mechanisms in place to embed values in all data ecosystems to protect human rights in the platform economy? 

There is this presumption that there is a set of 'values' that we all share. I have argued against this at length and in detail (sure, nobody reads it, but the argument exists). Contrast the philosophies of Ubuntu, Chinese communism, contemporary capitalism, North American Spiritualism, and Māori Pūtake (I'd link to these but you can look them up). Compare the very different paths prescribed by consequentialist ethics, deontology, and the ethics of care.

 

The problem with embedding values in data ecosystems is that doing so entrenches the very values of the colonizers and the feudalists whose influence we are trying to mitigate.

 

No, what we should be trying to do is to devise mechanisms to negotiate values, to ensure that no one set of values can prevail and be enforced over all others. That does not entail an ethical free-for-all, but it does mean that the question of values in technology is a lot more complex than is assumed by the many, many statements of ethical principles in AI.

  • What steps are there to mitigate the damaging effects of algorithmic biases?

As mentioned above, actually recognizing and enforcing universal human rights would be a start.

 

But as well, not using biased data and algorithms would also help. 

 

I don't think this is something that can be legislated - not simply because it would be almost impossible to enforce compliance, and not only because it is nearly impossible to define what constitutes non-biased data and algorithms, but because the only effective change here is cultural change.

 

We can draw lessons from other fields. In journalism, for example, there is a desire to be truthful and objective, and therefore to eschew biased reporting and journalistic practices. There has also been a historic desire to mitigate the effects of those who ignore these principles, from the Hearst media empire of the early 1900s to the Fox News of today. We know that laws and regulations don't work on matters of ethics. We know that these institutions just walk through ethical codes. They are, in both cases, the feudalists, and they just don't care.

 

So how to respond? Consider another case: that of science generally. Now it's true that there remains a lot of malpractice in science, especially in our field of education, but the response in science is to not take their conclusions seriously, to not trust those who ignore proper evidence and research, and to not implement whatever results they produce. That's why we didn't build huge cold fusion laboratories. That's why phrenology is not taken seriously.

 

It should be the same in the case of AI. We know, for example, that chatGPT is based on an incomplete and biased data set - not as bad as Microsoft's Tay, but bad enough. We should never use sites like Twitter as a data source. And - most of all - we should ridicule and make fun of those who use these systems for real things (like the lawyer who used chatGPT to conduct legal research). People who use biased data and algorithms should not be taken seriously as professionals.

 

We could write a law, I suppose. But nothing would weigh more than our own response as a culture.

 

Also (and I mention this in my ethics course): we could all practice being less biased and prejudiced in our day-to-day discourse. After all, AI learns from us. It's a tall order, I know. But if we want unbiased AI, we have to be an unbiased people.

  • What governance policies and frameworks are needed to make sure digital technologies and opportunities are available for everyone and there are no monopolies created?  

Antitrust laws?

 

I mean, we know how to do this. But we don't have the legislative capacity to actually do what we know needs to be done. 

 

Failing the obvious response, the best and only answer is to ensure that nobody owns AI (which could be extended to include the idea that nobody owns the data). Once somebody owns this, it's theirs, and in today's society we are already most of the way to technological monopolies.

 

So I support things like open source software, open data, open networks, etc. But I caution - a lot of the legislative solutions proposed in some of the items above (anti-surveillance laws, for example, or laws that enforce non-biased AI) make open source too expensive to create. The more regulated a domain, the harder it is to create open source. For this reason, a lot of existing open source and open content initiatives live on the fringes of the law. So long as we have legislation that protects commercial interests (in the guise of protecting society) we risk consolidating the domain into a few monopolistic entities.

  • How do we make sure there are proper governance mechanisms in place to embed values in all data ecosystems? 

 See my response to the previous version of this question about 'embedding values', above.

  • Whether, and why, it might be time for data ethics to become an integral part of the education curriculum in universities.

It is possible for most people graduating university to take zero ethics courses, or if required, to take a watered-down course such as 'business ethics'. I have seen no real desire on the part of anyone (other than actual ethicists) to see ethics properly so-called taught as a part of the regular curriculum.

 

There is, I think, a very good reason for this. In real ethics courses (such as the ones I've taught) students are taught to ask questions, consider alternatives, weigh different sets of values, and draw their own conclusions. Some students emerge supporting a form of virtue ethics, others hold on to traditional religious ethical values, and others become ethical relativists.

 

Nobody supporting the idea of (say) "data ethics to become an integral part of the education curriculum in universities" wants that. What they want is for there to be a neat set of ethical principles that 'everybody' (in the profession) supports, and for these to be taught as though they were laws or legal principles. And to be sure, there's a lot of legal positivism in such courses. It doesn't matter whether the law is ethical, or reasonable, or anything else. What matters is that the law exists.

 

If we really supported the idea of universal human rights, children would learn about ethics at a young age, so they could make their own decisions, rather than being indoctrinated into whatever ethical tradition happens to prevail in the geographic area where they have been raised. But I recognize that this (today) is a fantasy.

  • How can we build capacity for Data Ethics in universities and industry?

Why just universities and industry? Is it because this is where power is perceived to reside?

 

My own observation is that neither universities nor industry are structurally capable of being ethical. At best, there is the possibility that there might be some ethical people in these institutions. But the institutional values (such as they are) are not based on ethics. For universities, the core values are knowledge and elitism (or as some universities euphemistically maintain, 'excellence'). For industry, of course, it's money. 

 

It's worth noting that proponents of both academic and industrial institutions reverse the argument and make the claim that their values are what define 'ethical'. Hence, for example, we see in the debate surrounding AI the university professors coalesce around things like 'academic integrity' and 'plagiarism' while industry is more interested in things like 'copyright' and 'ownership'. We don't ever see the discussion turn to questions of human rights because human rights are not core ethical values in these institutions.

 

So the only way to build ethical capacity in universities and industry is to entrench and enforce the rights of ethical people in these institutions. Currently, a person's first loyalty is expected to be to their employer (and in industry, there is a fiduciary responsibility to shareholders). Strong and enforceable labour legislation should counter this, so people can adhere to their own values, and not those of their employer (which is what most of us will do in any case, unless actually threatened with poverty otherwise).

 

These are challenging times. As our society becomes increasingly complex, the urge for simple solutions becomes less and less reliable. Any form of authoritarianism, well-intentioned or not, based on principles of law rather than society, will fail. We can't just create codes or legislation to address massive ethical and cultural issues. We have, as a people, to become at least as sophisticated as the AI systems we are trying to comprehend. We have to create the means and the mechanisms for each of us to learn from those around us, and then to craft a form of governance that respects the decisions we make. And in the end, if we fail to manage the challenges and opportunities that AI affords, the chains that bind us will be chains of our own making.

 

Image: Forbes/Getty.

 

Mentions

- JISCMail - OPENEDSIG Archives, Jun 05, 2023
- OpenAI says it could ‘cease operating’ in the EU if it can’t comply with future regulation - The Verge, Jun 05, 2023
- Regulatory capture - Wikipedia, May 15, 2024
- Copyright and Licensing for Data - Scholarly Communications Office - Emory University, Jun 05, 2023
- The super-rich ‘preppers’ planning to save themselves from the apocalypse | The super-rich | The Guardian, Jun 05, 2023
- Ethical Codes, Jun 05, 2023
- Tay (chatbot) - Wikipedia, Jun 05, 2023
- A lawyer used ChatGPT to cite bogus cases. What are the ethics? | Reuters, Jun 05, 2023
- Legal Positivism (Stanford Encyclopedia of Philosophy), Jun 05, 2023
- Big Tech Is Making A Massive Bet On AI … Here’s How Investors Can, Too, Jun 05, 2023
- Questions and Answers on AI Regulation, Jun 06, 2023


