It’s been a while since I wrote a blog post; I was neck deep in project delivery, and this topic forced me to make the effort to extricate myself.
Should AI be regulated, as tech moguls such as the OpenAI CEO
have asked for?
My simple answer is NO.
Please read on before flaming…
1 Virtue Signalling
There can’t be a better example of virtue signalling (1)
than that, can there?
Think of most mafia-type movies where the mafia sides with
the police; more often than not, it is to control a threat to the mafia itself.
I am not saying the ‘insiders’ calling for regulation are like
the mafia; but I do have some experience of working with an incredibly smart
technical guy, who solves technical puzzle after technical puzzle, whose
‘baby’ was recently purchased, and who is now able to make his mark on the
world. Over beers, we discussed why he would want to (at that time, a
decade ago) create an agent that automates marketing, put it in the hands of
totally non-technical people, and kill the jobs of fellow technical people.
His answer was simple: it can be done, and if I don’t do it, someone else will. He
agreed that sometimes it would go wrong, but to him that was a matter of implementation.
I believe that the tech moguls think similarly. Now that
they have opened Pandora’s box, they want to quickly claim: what’s coming
out is not our fault, please sit on the lid.
The message is “we are using AI responsibly, but we cannot
comment on how others will use it. So please protect us from evil”.
To some degree, on top of being virtue signalling, this can
also be seen as anti-competitive. And here you had the impression that
the USA was all about competition?
2 Anti-Competitive
Just look back at the TikTok debate that went all the way to
congressional hearings. I am not into TikTok and have not been following
closely, but from what I have read, TikTok’s defence, if I can be so rough as
to summarise it, has two main points:
- we are not affiliated with the Chinese government, so please do not worry about us passing data to it; we are not spying for a state.
- what you are accusing us of, others are doing too (unsaid: or they are dumb enough not to know how to do it effectively).
To me, asking for oversight of TikTok specifically is
anti-competitive.
It is a bit the same with Huawei. Having worked on projects
in the telco field, I have some idea of data ownership. I do not think it is an
exaggerated fear that ‘foreigners’ can ‘see our data’; but that applies to all
foreign companies, so why just Huawei?
What the Ukraine conflict should have taught us is that conflicts
can blow up anywhere.
Furthermore, while at some point in time people thought that
MNCs, as pure capitalist beasts, would be purely profit driven, this has been
debunked; MNCs do care more about where they are headquartered or formed than
about other places. Hence, not only for Huawei and TikTok is there a risk of governments
getting involved. But to me, the key is: any company, any government.
Add to this how governments have been changing in recent
years, with agreements reneged on (I am looking at Mr Trump, for example), and you have
muddy waters everywhere.
It is OK that, if you are not China, you are wary of Huawei;
but that should also mean that, if you are not Finnish, you should think about
Nokia, for example…
3 Missing the forest for the trees
Whether wilfully or not, the whole argument about regulating
AI is missing the forest for the trees. People keep discussing this tree
or that tree, ignoring the fact that the whole clump of trees forms part of a
forest, with all sorts of trees, and animals, and birds.
This is an issue that economists are very familiar with (the
difference between micro and macro economics), and that the Singapore government is
very familiar with (having abandoned the concept of Singapore Inc.).
To me AI can do a lot of good and a lot of bad (most often
both simultaneously), depending on what it is used for and whose point of view
you are taking. At this moment only some points of view have been considered.
Most of the debate is on a micro level. How can I reduce
costs by using ChatGPT? As a C-level, I care mainly about my own bottom line, and
that is loosely tied to my employer’s. So if I use ChatGPT to replace
receptionists, call-centre people, developers… I save tons. The beauty of it is
that, as first mover, I make fantastic profits while my competitors catch up.
Even when they catch up, who will undercut us?
But what of people being displaced?
Is UBI (Universal Basic Income) an option? It is being trialled
in the UK (2) and has been trialled around the world (3).
Just think of this: if a whole chunk of people lose their
jobs, then unless the cost of products and services drops accordingly
(because AI has replaced some humans), how will they afford to buy anything?
What is the point of producing more efficiently if people
cannot afford to buy what you produce? Add to this that the cost of AI, say ChatGPT,
is low today; will it remain so forever? People who have used disruptive new
products and services such as Uber/Grab certainly have their opinion on this.
4 But what is the real problem?
The real problem, in my opinion, with AI as with anything, is
the marketing around it and people’s expectations.
I attended Microsoft Build, and I came out of it pretty
excited, especially about the safeguards they are trying to impose in terms of
responsible AI. One of the most important aspects is human oversight.
However, the real issue is that people think AI is the
solution to everything.
Everyone is “ChatGPT-ing” (4), expecting miracles. It is a
language model, not a truth model. But the real question is: what is the cost of
the mistakes it will make?
As long as people’s expectations are tempered, they truly
understand there will be errors, and they budget for the costs of those errors, then
it is OK to use these models directly to answer business questions or, say, process
orders…
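
To make “budget for the costs of these errors” concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure in it (task volume, saving per task, error rate, remediation cost) is a hypothetical placeholder chosen purely for illustration, not data about any real deployment:

# Back-of-the-envelope check: do the labour savings from automating a task
# with a language model outweigh the expected cost of its mistakes?
# All figures below are hypothetical placeholders, purely for illustration.

def net_benefit(tasks_per_month: int,
                saving_per_task: float,    # labour cost saved per automated task
                error_rate: float,         # fraction of tasks the model gets wrong
                cost_per_error: float) -> float:  # avg cost to catch and fix one error
    savings = tasks_per_month * saving_per_task
    expected_error_cost = tasks_per_month * error_rate * cost_per_error
    return savings - expected_error_cost

# Hypothetical example: 10,000 orders a month, $2 saved per order,
# a 3% error rate, and $40 to remediate each botched order.
print(net_benefit(10_000, 2.0, 0.03, 40.0))  # 20000 - 12000 = 8000

If the error rate or the remediation cost creeps up, the sign of that number flips quickly; that, in a nutshell, is what tempered expectations and budgeting mean here.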
Who needs Amazon when you can ask ChatGPT where you can find
the product you are looking for at the lowest price? Sometimes you will get
lemons. As long as you are prepared for that…
But to trust ChatGPT with opinion pieces is a different
ball game altogether…
5 So what is the solution?
The solution is simple: education. People, users, need to be
educated about the risks, so they choose which tool to use when.
Education, not regulation.
Bottom up, not top down.
And I dare say that the tech industry has been more about
marketing and jockeying for position than educating.
If we want AI to help humanity in general, this has to
change.
(1) https://en.wikipedia.org/wiki/Virtue_signalling
(2) https://www.theguardian.com/society/2023/jun/04/universal-basic-income-of-1600-pounds-a-month-to-be-trialled-in-england
(3) https://en.wikipedia.org/wiki/Universal_basic_income_around_the_world
(4) Please note I am using ChatGPT as a convenient stalking horse; I am talking about the popular use of AI tools in general.