World Artificial Intelligence Cannes Festival (WAICF)
This week the WAICF was held in Cannes: a three-day marathon, with many exhibitors and a lot of presentations. The very good news was that it was possible to register for free, with some limitations; I was therefore able to attend part of the presentations and visit many stands without having to pay for a ticket. There were so many presentations that my brain overflowed, and it is very difficult to report everything I saw.

In general most of the presentations cited ChatGPT at least a few times: ChatGPT has contributed to changing the way people look at AI. Before it, AI was far from people's imagination: now most of us have realized that AI is here to stay and that it will have a real impact on our lives. Of course not all that glitters is gold, and despite a lot of limitations, people are starting to think about the positive and negative implications AI may have on our lives.
There is a concrete fear that AI algorithms will be used against common people's interests: many will lose their jobs because an "intelligent" machine can replace them, even in creative jobs that everybody believed could not be automated. For instance, in a game development company, many creative artists and developers have reported that they fear what AI can do to them: Stable Diffusion can quickly generate many impressive images, and ChatGPT can spot many bugs in source code… But this is not the only source of concern: AI is already applied in many services and can easily be used to take advantage of customers. Is the recommended hotel just the one that maximizes the website's profit? Is some sort of bias introduced because you have been identified as belonging to a user group that can pay more for the same service? For instance, for a female audience, an algorithm could increase lipstick prices and reduce drill prices – in the end the average prices could look equitable, but it is just cheating.
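To make the trick concrete, here is a toy illustration (all numbers invented, not taken from any real pricing system): the catalogue average stays exactly the same, while the profiled customer ends up paying more.

```python
# Toy example: targeted price adjustments that leave the catalogue average unchanged.
base_prices     = {"lipstick": 20.0, "drill": 100.0}
adjusted_prices = {"lipstick": 30.0, "drill": 90.0}   # +50% on lipstick, -10% on the drill

avg_base     = sum(base_prices.values()) / len(base_prices)
avg_adjusted = sum(adjusted_prices.values()) / len(adjusted_prices)
print(avg_base, avg_adjusted)   # 60.0 60.0 -- the average looks "equitable"

# But a customer profiled as a lipstick buyer (who will never buy the drill)
# now pays 30.0 instead of 20.0: a 50% increase hidden behind a stable average.
```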
The ethical problem was addressed in many presentations. Some may be interested in visiting the AI for food site or Omdena: if AI can be used for bad purposes, it is also true that you can use it to do great things, like United Nations projects, promoting gender equality or predicting cardiac arrests.
In his talk Luc JULIA, chief scientific officer of Renault, compared AI to a hammer: you can use it for good purposes – drive a nail into a wall – or bad purposes – hit your noisy neighbor – but there is one very important thing: the hammer has a handle. It is our responsibility to hold the handle and decide what we want to do with it. My own consideration: it is quite unfair if only big companies hold the handle – hence the need for some sort of regulation.
How should AI usage be regulated by law? What should be encouraged, what should be forbidden, and how can a customer fight a big company if they have somehow been damaged? Proving that a system is discriminatory requires a lot of effort from the offended party: sample the system's behavior, identify in which contexts it is unfair, demonstrate that the damage is not marginal but worth a judge's attention… Creating a new law can also take years, and the evolution in this domain is so fast that there is a concrete risk the law is born already obsolete. Nor should the same level of regulation be applied everywhere: an autonomous vehicle requires much stricter regulation than a recommender system, of course. The legislator has to choose the right tool: just a recommendation, incentives when certain behaviors are respected, or fines in other cases.
Coming back to the hammer metaphor, how good is the tool that we have? Stuart Russell, from UC Berkeley, gave a talk on Artificial General Intelligence, reporting many cases where it is possible to make ChatGPT or other systems fail simply because they do not really understand the meaning behind questions. For instance, a Guardian reporter asked a simple logic question about having 20 dollars and giving 10 to a friend, asking how many dollars there were in total: according to ChatGPT there were 30 dollars. There are new paradigms that we can explore in the future to build better systems, like probabilistic programming and assistance game theory. The latter is very fascinating: what is the risk of machines taking control of the hammer handle, given that they can evolve so fast and accumulate a level of knowledge no human can achieve? A wrongly specified objective can lead to disaster, but in an assistance game the machine is just there to help its human and does not know the real goal itself: so it just tries to help and not to interfere too much.
Another subject of interest was bias: we are training AI systems with real data. These systems are built to optimize some function; for instance, a classifier is trained to assign a class to a sample the same way it happens in reality. But what if reality is unfair? We all know women's salaries are lower than men's, and we definitely do not want AI systems to perpetuate the same injustice when used in production. Nobody has a solution for this today, and it is indeed a good business opportunity. No company wants a bad reputation because it applied discrimination, and few companies have the capacity to develop bias filtering themselves. As AI is democratized, there will be a need for standard bias-cleaning applications in many contexts.
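As a rough idea of what such a check could look like, here is a minimal sketch of one common fairness measure: the gap in positive-prediction rates between groups (often called demographic parity). The classifier outputs and the groups below are invented for illustration; real bias audits combine several metrics and much larger samples.

```python
# Minimal sketch: compare a model's positive-prediction rate across groups.
from collections import defaultdict

def positive_rate_by_group(predictions, groups):
    """Fraction of positive predictions (1s) for each group."""
    counts, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        counts[group] += 1
        positives[group] += pred
    return {g: positives[g] / counts[g] for g in counts}

# Hypothetical outputs of a salary-raise classifier (1 = raise granted).
preds  = [1, 1, 1, 1, 0, 0, 0, 1, 0, 0]
groups = ["men", "men", "men", "men", "men",
          "women", "women", "women", "women", "women"]

rates = positive_rate_by_group(preds, groups)
gap = max(rates.values()) - min(rates.values())
print(rates)               # {'men': 0.8, 'women': 0.2}
print("parity gap:", gap)  # 0.6 -- a large gap suggests the model reproduces the unfairness
```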
The same applies to general AI models: AI is being democratized; some companies or organizations will be able to craft complex models, while all the others will in the end buy something precooked and apply it in their applications. A real AI project requires a lot of effort in many phases: data collection, data verification, feature extraction, training, resource management (training requires complex infrastructure) and monitoring. This is the reason all the big IT players are focusing on creating AI platforms where customers will be able to implement their models (SAS Viya, Azure Machine Learning, IBM Watson…).
Another issue with bias is cultural bias: ChatGPT is mostly trained on English content, but not everybody is a fluent English speaker. What solutions exist for other languages? Aleph Alpha was presenting their model, which works in multiple languages (German, French, Italian…). One interesting thing I saw is that they put a big effort into trustability; the presenter showed it was possible to trace why an answer had been chosen back to the source text used for training. One can then decide if the answer was just randomly correct, or if it was correct for a good reason. Their system is also multi-modal: you can index text and images together, and text is also extracted from the images. Nearly all documents contain both, so it is interesting to have one system that can work on them.
Trust is a word that was often used in the presentations. If we start applying AI algorithms in reality, we want them to be somehow glass-box algorithms. A regulator must be able to inspect one, when needed, to understand if something illegal has been done (for instance, penalizing employees who have taken too many sick-leave days). The company that is developing it must understand how reliable it is, and whether it is using the information we expect it to use: if a classifier decides a picture represents an airplane just because there is a lot of blue sky, well, it is not a good tool. We must also be sure that personal or restricted information does not leak into an AI model that is then reused without our permission, or for a purpose we do not approve of: our personal information is an invaluable asset.
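One simple way to probe whether a classifier relies on the right evidence is an occlusion test: blank out a region of the image and see how much the prediction changes. Below is a minimal sketch of that idea; the model here is a deliberately flawed toy stand-in (it scores "airplane" by how blue the picture is), not any real library's API.

```python
# Occlusion-style sanity check: mask a region and measure the confidence drop.
import numpy as np

def occlusion_drop(model, image, region, fill=0.0):
    """Confidence drop when `region` = (y0, y1, x0, x1) is blanked out."""
    baseline = model.predict_proba(image)
    occluded = image.copy()
    y0, y1, x0, x1 = region
    occluded[y0:y1, x0:x1, :] = fill
    return baseline - model.predict_proba(occluded)

class BlueSkyModel:
    """Toy 'airplane' classifier that only looks at how blue the image is."""
    def predict_proba(self, image):
        return float(image[:, :, 2].mean())   # mean of the blue channel

image = np.zeros((100, 100, 3))
image[:50, :, 2] = 1.0              # top half: blue sky
image[60:70, 40:60, :] = 0.5        # a small grey "airplane"

model = BlueSkyModel()
print(occlusion_drop(model, image, (0, 50, 0, 100)))   # mask the sky   -> big drop (~0.50)
print(occlusion_drop(model, image, (60, 70, 40, 60)))  # mask the plane -> tiny drop (~0.01)
```

If masking the sky hurts the score far more than masking the airplane itself, the classifier is looking at the wrong thing, which is exactly the situation described above.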
The Crédit Mutuel bank is developing many projects with AI, and what they reported is interesting: they conducted experiments with humans evaluating a task alone, AI alone on the same task, and then two other scenarios where humans could decide whether to use AI, or where they were forced to see the AI's propositions. The most successful scenario was the last one: the AI hammer is useful and we need to realize that it is here to stay. We also need to understand who is accountable for an AI system: people have to know what they can and cannot do with AI, and this must be clear and homogeneous within the same company. When you have multiple projects you realize that you need a company policy on AI. Somebody from Graphcore also suggested focusing AI projects on domains where the cost of error is low: using AI to automate writing summaries of long documents is much less dangerous than developing an airplane autopilot; they are two different planets in terms of accountability.
To conclude, I would also like to cite Patricia REYNAUD-BOURET's work on simulating how the human brain works. She is a mathematician working on simulating real neuron activity: she described how a neuron works, how what matters is the number of stimuli it receives within a time range, and how this makes the neuron activate and propagate a signal to other neurons. We have about 10^11 neurons in our brain, but is it possible to simulate their activity on a computer, maybe even a laptop? With some mathematical assumptions it is possible, at least for specific brain areas. This and similar work will be useful to understand diseases like epilepsy… We should all remember that pure research is something we need to foster, because in the end it will unlock incredible results.
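Her actual models rely on much more refined mathematics, but the intuition she described – stimuli accumulating within a time window until a neuron fires and passes the signal on – can be captured by a toy simulation. Below is a minimal sketch under that assumption only; the network size, threshold and probabilities are all invented.

```python
# Toy spiking-network sketch: a neuron fires when enough stimuli arrive within a window,
# and its spike becomes a stimulus for the neurons it is connected to.
import random

WINDOW = 5        # how far back (in time steps) incoming stimuli still count
THRESHOLD = 3     # stimuli needed within the window to trigger a spike

def simulate(n_neurons=50, n_steps=200, p_connect=0.1, p_external=0.2):
    # random directed connections: each neuron sends its spikes to a few others
    links = {i: [j for j in range(n_neurons) if j != i and random.random() < p_connect]
             for i in range(n_neurons)}
    stimuli = {i: [] for i in range(n_neurons)}   # arrival times of stimuli, per neuron
    spikes = []
    for t in range(n_steps):
        # background stimuli arriving from outside the simulated network
        for i in range(n_neurons):
            if random.random() < p_external:
                stimuli[i].append(t)
        # a neuron fires when enough recent stimuli have accumulated in its window
        firing = [i for i in range(n_neurons)
                  if sum(1 for s in stimuli[i] if t - s <= WINDOW) >= THRESHOLD]
        for i in firing:
            spikes.append((t, i))
            stimuli[i].clear()            # reset the neuron after it fires
            for j in links[i]:            # ...and propagate the spike to its neighbors
                stimuli[j].append(t)
    return spikes

print(len(simulate()), "spikes in a toy 50-neuron network")
```

Even this crude version shows activity spreading through the network; real models of whole brain areas need the kind of mathematical assumptions she mentioned to stay tractable.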