On AI and Corporate Social Responsibility

This morning I went for a run. While running I listened to Systems Crash and its discussion about AI, comparing the US attitude with the European and international one, and it convinced me that I much prefer to use EU and international projects rather than American ones, for a simple reason: Corporate Social Responsibility.

The US wants to move fast and break things, including the law, by hoovering up data it has no moral right to. In contrast, from what I gather from glimpsing at the Élysée documents about Macron's projects for AI, there is a push for collaboration, for helping humanity, and more.

If the US wants to move fast, break things, and impose its supremacy, then we have an easy choice: not to use their AI solutions. If the Writers Guild in the US protested for months, then it makes no sense for the US to continue down its current road, because instead of progressing at a pace that people find comfortable, it is burning collaborative bridges.

Years ago we were discouraged from using VK because it is Russian-owned, and more recently we were discouraged from using TikTok because it is Chinese-owned. By the same logic, in the age of Trump as president, we should avoid all US-controlled websites and services in favour of EU, African, South American, Asian, and Australian alternatives. The order is not political; it is the order given in one of the press statements.

The IB and International Morality

When you study in an international school and work within the UN system, you develop certain values of diversity, equality, and inclusivity. Specifically, you take the Universal Declaration of Human Rights seriously, and you try, as best you can, to live by those values.

Within this context AI, even at the development stage, should follow those values, as well as respect Corporate Social Responsibility (CSR). It is because OpenAI did not follow due process that writers went on strike and that people are protesting against their data being used to train AI via opt-out provisions rather than opt-in ones. We should opt in to having our data used to train AI, not have to opt out.

The British Broadcasting Corporation (BBC) recently published research finding that:

  • 51% of all AI answers to questions about the news were judged to have significant issues of some form

  • 19% of AI answers which cited BBC content introduced factual errors – incorrect factual statements, numbers and dates

  • 13% of the quotes sourced from BBC articles were either altered or didn’t actually exist in that article

Within this context, uncensoring AI would be a grave mistake, because unless people have studied media literacy, theory of knowledge, or morality and ethics, and unless they have some background on specific topics, they will stumble into potential indoctrination. That is why, at least for now, AI should be "censored" and, in general, follow CSR guidelines.

And Finally

Le Chat by Mistral and the AI options provided by Infomaniak give me EU-based AI solutions. I do not need to use solutions from the United States. I have a choice: I can use the options that reflect my values.

Although the "move fast and break things" phrase is popular, I think it is tone deaf to what people actually want. I feel that Macron's goals are more realistic, and more in tune with progressing at a pace that people find acceptable. If we want to give some editorial control to AI, it has to reflect our values and norms. In so doing, the pushback against new technology will be lessened.