Sunday 18 June 2023

Singapore firms found arming Myanmar: how?! What analytics is missing?

Myanmar is a country that is close to my heart. I felt at home in Yangon, loved the place and the people; I left before the new junta took over. Analytics is what I do for a living. So I am a little bit ‘appalled’ (1) that firms based in Singapore, using Singapore ports, have been sending weapons to Myanmar, given that the Singapore government is at the forefront of finding a solution to the Myanmar crisis (2). Singapore has taken a ‘principled position’ against the Myanmar military’s use of lethal force against unarmed civilians (3).


How do we know the list of firms?

The beauty of this is that it comes from the horse’s mouth: the list of arms providers appears to have come from a leak from the procurement department of the Myanmar Ministry of Defence (4), although there is some debate around which firms have been selling arms to the Myanmar military after the ‘coup’ and the sanctions that followed.

 

How deep does the involvement of Singapore-related companies go?

First of all, these are Singapore-registered companies, under the purview of the corporate regulatory authority. Secondly, if they used Singapore ports, a whole range of organisations must have been overseeing their operations, from the ports authority, which makes sure the physical movement takes place without a hitch, to customs, which is in charge of the legality of the trade. Now, I am in no way suggesting that there was any complicity, nor am I suggesting all containers need to be checked. But I do think analytics could have helped.

When trade takes place, most of the time there is trade financing and insurance, and financial institutions get involved. Most of the time someone loans money for the transaction, and someone insures the contents. Again, I am not saying that a particular Singapore bank or Singapore-based insurer was involved, but chances are one was. The analytics teams of the banks and insurers must have had a look and approved the risk involved. Yet these transactions, which broke the sanctions imposed by the Singapore government, were not picked up by these institutions.

We must also remember that, unless the military equipment was manufactured in Singapore, it must have been imported, stored, and then exported. Hence the involvement of the organisations listed above is doubled.

Thirdly, there are the companies that actually do the moving and the storing, although the companies listed by the procurement department of the Myanmar Ministry of Defence could be in that business themselves. So let’s just limit ourselves to the large institutions above.

 

What could these organisations/authorities have done?

Some basic checks at the government-related organisations, which I would actually expect to be routinely carried out, but apparently are not:

  • Are these companies, by their licence to operate, allowed to engage in export?
    • This is a basic hygiene check.
  • Do these companies regularly trade with Myanmar?
    • Any company that all of a sudden starts to trade with a sanctioned country should raise a red flag or two.
  • How about the pattern of trade? Even if they regularly trade with Myanmar, has the volume or the frequency changed?
  • More interestingly, I doubt they would list helicopters on the manifest, but does the manifest gel with the container volume, size and weight?

These are basic hygiene models that can be implemented as the front-end systems collect the data and feed it to back-end analytical systems. The government is building analytical platforms and has even made it possible for services from the most common commercial cloud providers to be used by government departments and government-related bodies.
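
To make this concrete, here is a minimal sketch, in Python, of what such hygiene checks could look like. The record layout, field names and thresholds are my own assumptions for illustration, not any agency’s actual schema or rules:

    # A hypothetical sketch of the hygiene checks above; every field name and
    # threshold is an illustrative assumption, not an actual schema or policy.

    SANCTIONED = {"MM"}  # e.g. Myanmar, per the sanctions in force

    def flag_shipment(shipment: dict, exporter_history: dict) -> list:
        """Return a list of red flags for one declared shipment."""
        flags = []

        # 1. Licence check: is the exporter licensed for this category of goods at all?
        if shipment["goods_category"] not in exporter_history.get("licensed_categories", set()):
            flags.append("exporting outside licensed business activities")

        # 2. New trade lane: a sudden first shipment to a sanctioned destination.
        past = exporter_history.get("shipments_to", {}).get(shipment["destination"], 0)
        if shipment["destination"] in SANCTIONED and past == 0:
            flags.append("first-ever shipment to a sanctioned country")

        # 3. Pattern change: volume well above the exporter's historical norm.
        avg_teu = exporter_history.get("avg_monthly_teu", 0)
        if avg_teu and shipment["teu"] > 3 * avg_teu:
            flags.append("volume far above historical average")

        # 4. Manifest plausibility: declared weight per container vs the declared goods.
        typical = exporter_history.get("typical_kg_per_teu", {}).get(shipment["goods_category"])
        if typical and abs(shipment["gross_kg"] / shipment["teu"] - typical) > 0.5 * typical:
            flags.append("declared weight inconsistent with declared goods")

        return flags

Nothing here is clever; it is exactly the kind of rule that can run the moment a declaration is keyed into a front-end system.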

There is no reason for these basic checks not to be implemented. Some of them are very common in, say, the banking sector, where banks assess the risk of individual customers, especially those engaged in international trade. And this brings us to the private sector.

 

What could banks/insurers involved in the trade transaction have done?

I am pretty sure that financial institutions are very aware of sanctions and are bound to flag cases where these are being circumvented. It is often a case of closing the barn door after the horses have bolted, but the systems are built to flag potentially fraudulent transactions. Tweaking them to also flag sanction-busting is not rocket science.

Even better, it is not difficult to predict the business-as-usual flows of many corporate banking customers; I have done that myself, and I am sure many others have too, for various financial institutions. It is a basic tool that allows banks to know, in advance, what funds are required, by whom, and so on… Now, if a company asks for financing when the models say it is unlikely to need it, and on top of that the trade involves a country under sanctions, then a second look should be required. And that second look should leave a trail showing that efforts were made to ensure the propriety of the trade.
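
As an illustration, here is a hedged sketch of that second look, again in Python. The sanctions list, the six-month history requirement and the three-sigma threshold are assumptions I am making for the example, not any bank’s actual policy:

    # A hypothetical "business as usual" check; the thresholds are illustrative.
    from statistics import mean, stdev

    SANCTIONED = {"MM"}  # sanctioned destinations, illustrative

    def needs_second_look(request_amount: float, destination: str,
                          past_monthly_drawdowns: list) -> bool:
        """Flag a trade-financing request that is out of line with the customer's
        usual pattern, or that touches a sanctioned destination."""
        if destination in SANCTIONED:
            return True  # sanctions exposure alone warrants a review

        if len(past_monthly_drawdowns) < 6:
            return True  # too little history to call anything business as usual

        mu = mean(past_monthly_drawdowns)
        sigma = stdev(past_monthly_drawdowns)
        # Roughly three standard deviations above the customer's usual drawdown
        return sigma > 0 and (request_amount - mu) / sigma > 3

A real model would be richer (seasonality, counterparties, goods categories), but even this crude version leaves an auditable trail of what was checked and when.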

Hence, to me, if any Singapore bank facilitated the transactions whereby military equipment was exported from Singapore to Myanmar after sanctions were imposed by Singapore, then that bank is guilty of, at a minimum, laziness, or worse, of keeping an eye shut to let the profits roll in.

 

Conclusion

Did you notice something? I did not mention LLMs, ChatGPT, or even AI. Everything I described can be done with very basic models and algorithms. All it would have taken is someone to understand the business imperative, and someone to get it done. But then again, maybe it wasn’t high on the list of imperatives. And that is the reality.

What is the cost of these infractions? Maybe some of the companies that made huge profits sanction-busting will close down and their directors will get a slap on the manicure, but the trade has taken place, the financing has been profited from, the ports have been used…

As in many cases, doing the analytics and implementing it is easy; understanding the need for it and having the will to use it is often the stumbling block. That is probably the case here: when there is no will, the way doesn’t matter.

 

  1. https://frinkiac.com/gif/S08E08/770702/772154/SSBBTSBTSE9DS0VEIEFORCBBUFBBTExFRC4
  2. https://www.mfa.gov.sg/Newsroom/Press-Statements-Transcripts-and-Photos/2023/05/20230519-Comments-on-Myanmar-Report
  3. https://www.channelnewsasia.com/singapore/myanmar-arms-singapore-based-entities-mfa-un-special-rapporteur-3500051
  4. https://thediplomat.com/2022/08/report-claims-38-singapore-based-firms-supplying-myanmars-military/

Monday 5 June 2023

Tech moguls call for regulation of AI: what gives?

It’s been a while since I wrote a blog post; I was neck-deep in project delivery, and this topic forced me to make the effort to extricate myself.

Should AI be regulated, as tech moguls such as the OpenAI CEO have asked?

My simple answer is NO.

Please read on before flaming…

 

[Image generated with hotpot.ai]

1 Virtue Signalling

There can’t be a better example of virtue signalling (1) than that, can there?

Think of most mafia-type movies: when the mafia sides with the police, more often than not it is to control a threat to the mafia itself.

I am not saying the ‘insiders’ calling for regulation are like the mafia; but I do have some experience of working with an incredibly smart technical guy, who solves technical puzzle after technical puzzle, whose baby was recently purchased, and who is now able to make his mark on the world. Over beers, we discussed why he would want (at that time, a decade ago) to create an agent that automates marketing, puts it in the hands of totally non-technical people, and kills the jobs of fellow technical people. His answer was simple: it can be done, and if he doesn’t do it, someone else will. He agreed that sometimes it will go wrong, but that’s a matter of implementation.

I believe that the tech moguls think similarly. Now that they have opened Pandora’s box, they want to quickly claim that what’s coming out is not their fault, and ask someone else to please sit on the lid.

The message is “we are using AI responsibly, but we cannot comment on how others will use it. So please protect us from evil”.

To some degree, on top of being virtue signalling, this can also be seen as anti-competitive. Hey, and you were under the impression that the USA was all about competition?

 

2 Anti-Competitive

Just look back at the TikTok debate that went all the way to congressional hearings. I am not into TikTok and have not been following closely, but from what I have read, TikTok’s defence, if I can be so rough as to summarise it, rests on two main points:

  1. we are not affiliated with the Chinese government, so please do not worry about us passing data to it; we are not spying for a state.
  2. what you are accusing us of, others are doing too (unsaid: or they are too dumb to know how to do it effectively).

To me, asking for oversight on TikTok specifically is anti-competitive.

It is a bit the same with Huawei. Having worked on projects in the telco field, I have some idea of data ownership. I do not think it is an exaggerated fear that ‘foreigners’ can ‘see our data’; but that applies to all foreign companies, so why just Huawei?

What the Ukraine conflict should have taught us is that conflicts can blow up anywhere.

Furthermore, while at some point in time people thought that MNCs, as pure capitalist beasts, would be purely profit-driven, this has been debunked; MNCs do care more about where they are headquartered or formed than about other places. Hence, not only for Huawei and TikTok is there a risk of governments getting involved. But to me, the key point is: any company, any government.

Add to this how governments have been changing in recent years, with agreements reneged on (I am looking at Mr Trump, for example), and you have murky waters everywhere.

It is OK that if you are not China you are wary of Huawei, but that should also mean that if you are not Finnish, you should think twice about Nokia, for example…

 

3 Missing the forest for the trees

Whether wilfully or not, the whole argument about regulating AI misses the forest for the trees. People keep discussing this tree or that tree, ignoring the fact that the whole clump of trees forms part of a forest, with all sorts of trees, and animals and birds.

This is an issue that economists are very familiar with (the difference between micro and macro economics), and that the Singapore government is very familiar with (abandoning the concept of Singapore Inc).

To me, AI can do a lot of good and a lot of bad (most often both simultaneously), depending on what it is used for and whose point of view you are taking. At the moment, only some points of view have been considered.

Most of the debate is at the micro level: how can I reduce costs by using ChatGPT? As a C-level executive, I care mainly about my own bottom line, and that is loosely tied to my employer’s. So if I use ChatGPT to replace receptionists, call centre people, developers… I save tons. The beauty of it is that, as a first mover, I make fantastic profits while my competitors catch up. Even when they do catch up, who will undercut us?

But what of people being displaced?

Is UBI (Universal Basic Income) an option? It is being trialled in the UK (2) and has been trialled around the world (3).

Just think of this: if a whole chunk of people lose their jobs, how will they buy stuff, unless the cost of products and services drops enough, thanks to AI replacing some humans, for them to still afford those products and services?

What is the point of producing more efficiently if people cannot afford to buy what you produce? Add to this that the cost of AI, say ChatGPT, is low today; will it remain so forever? People who have used the products and services of new disruptors such as Uber/Grab certainly have their opinion on this.

 

4 But what is the real problem?

The real problem, in my opinion, with AI as with anything, is the marketing around it and people’s expectations.

I attended Microsoft Build, and I came out of it pretty excited, especially about the safeguards they are trying to impose in terms of responsible AI. One of the most important aspects is human oversight.

However, the real issue is that people think AI is the solution to everything.

Everyone is “ChatGPT-ing” (4), expecting miracles. It is a language model, not a truth model. So the real question is: what is the cost of the mistakes it will make?

As long as people’s expectations are tempered, they truly understand there will be errors, and they budget for the cost of those errors, then it is OK to use these models directly to answer business questions or, say, process orders…

Who needs Amazon when you can ask ChatGPT where you can find the product you are looking for at the lowest price? Sometimes you will get lemons. As long as you are prepared for that…

But to trust ChatGPT with opinion pieces is a different ball game altogether…

 

5 So what is the solution?

The solution is simple: education. People, users, need to be educated about the risks, so they choose which tool to use when.

Education, not regulation.

Bottom up, not top down.

And I dare say that the tech industry has been more about marketing and jockeying for position than educating.

If we want AI to help humanity in general, this has to change.

  

  1. https://en.wikipedia.org/wiki/Virtue_signalling
  2. https://www.theguardian.com/society/2023/jun/04/universal-basic-income-of-1600-pounds-a-month-to-be-trialled-in-england
  3. https://en.wikipedia.org/wiki/Universal_basic_income_around_the_world
  4. Please note I am using ChatGPT as a convenient stalking horse, I am talking about the popular use of AI tools in general.