The A.I. Threat Palaver
19 July 2023

When the marketing channels of social media vibrate on the same frequency, the resulting resonance is felt throughout the web. A few months ago, that frequency was "the effect of AI on society". Within hours, you had all sorts of channels talking about AI, pictures with human-robot heads, video interviews with scientists, narratives on how AI controls our lives and so on... There were doomsday fearmongers going "AI gon' whoop yo' ass", silly vanilli influencers prompting you to ask AI for help on how to win the affections of that girl you fancy and YouTube "professors" offering tutorials on the effective use of ChatGPT. The comments were hilarious! Some were even asking if technology will destroy our civilization. I mean, is this the 21st century? Do people know the meaning of that word? If they don't, surely they can look it up? Here is a definition from the Oxford Dictionary.
 
technology (noun): the application of scientific knowledge for practical purposes, especially in industry.
 
Technology is not a gang of bandits or a rogue state with access to nuclear weapons. It is merely a tool. If it is used to destroy the planet or to eliminate our species, then the responsible parties are those who use it to that effect. Tools do not have intentions and they don't make decisions. But what happens when the scientific knowledge becomes available on the internet for a monthly subscription? Can AI be trusted? Right! Before I get to that question, let me first provide some background on what AI broadly refers to in this context.

Yesterday, I noticed a new offering on the "google.com" page. "Try Bard, an early AI experiment by Google", it said. About time, I thought. Google's AI chatbot has been available in the USA and the UK since March. Bard is Google's response to Microsoft's Bing Chat and OpenAI's ChatGPT. So, what are these so-called online helpers or collaborators?
 
Simply put, they are software. The user interface is a normal web form. However, their back-end is a complex network of algorithms modelled on the human brain, or at least on what we think we understand about it today. The chatbots rely on pre-trained language models (PLMs) to produce language: these models are trained on vast collections of unannotated text, and their goal is to predict the next token given the text that precedes it. Artificial Neural Networks and Natural Language Processing are not new ideas, but they became increasingly popular with the growth of the web in the 1990s. Since then, a few breakthroughs in these fields have facilitated the advancement of PLMs, the most recent being the introduction of the Transformer architecture in 2017, which underpins most generative language models today.
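
To make the "predict the next token" idea concrete, here is a minimal sketch using the Hugging Face transformers library and the small, publicly available GPT-2 model. This is only a toy illustration under those assumptions; the commercial chatbots run far larger models with extra layers of fine-tuning, but the core mechanism is the same.

```python
# A toy illustration of next-token prediction, not how Bard/Bing Chat/ChatGPT are accessed.
# Assumes the Hugging Face "transformers" library and the small public GPT-2 model.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The application of scientific knowledge for practical purposes is called"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits        # a score for every token in the vocabulary

next_token_scores = logits[0, -1]          # scores for the position right after the prompt
top = torch.topk(next_token_scores, k=5)   # the five most likely continuations

for score, token_id in zip(top.values, top.indices):
    print(repr(tokenizer.decode([int(token_id)])), float(score))
```

A chatbot essentially repeats that single step over and over, feeding each predicted token back in as part of the prompt, with additional fine-tuning to make the output read like conversation.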
 
As expected, Bard uses Google's own language models, originally "LaMDA" and more recently "PaLM 2", together with "Google Search". On the other hand, Bing Chat uses OpenAI's language model "GPT-4" together with "Microsoft Bing". Finally, ChatGPT uses OpenAI's "GPT-3.5" (with "GPT-4" reserved for subscribers) and, unlike the other two, does not query a search engine by default. There are, of course, several other chatbots available and some of them are designed for kids.
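
For what it is worth, developers do not go through those web forms at all; they call the underlying models directly. Below is a minimal sketch of such a call using OpenAI's Python client as it looked in mid-2023 (the API key, model choice and prompt are placeholders, and newer versions of the library use a different client interface).

```python
# A hedged sketch of querying a chat model through OpenAI's API (openai<1.0, mid-2023 style).
# The API key and prompt are placeholders; newer library versions expose a different client object.
import openai

openai.api_key = "sk-..."  # your own key goes here

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "In one sentence, what is a pre-trained language model?"},
    ],
)

print(response.choices[0].message.content)
```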

Have I used a chatbot? Actually, yes, I did have a session with one of those models, for the experience. I will not say which one it was, as they are all similar anyway, but I can tell you that the answers were vague and sometimes inaccurate. As an example, at some point I asked it to evaluate a chapter from an old essay on the French Revolution, which I had written as part of an assignment during my first year at university. It started by saying that it was a very good essay - yeah, right! Then, it suggested an alternative version. The speed with which these models compose language is impressive, but that is mainly a matter of processing power. Perhaps the more important thing was that it delivered a piece with more vibrancy and pizzazz than the original. At the same time, its version was rather fanciful and often inaccurate. But do you want an essay that reads like it was produced by a tabloid columnist? Maybe the paid version gets better results? I don't know, but I doubt it! Would I use a chatbot again? I probably would, as an advanced search engine, if it were a free service.

Now that we have some idea about what AI language models are, we may return to the previous question: Can they be trusted?

Scientists working in the industry have voiced concerns about the rapid developments in AI models. Some have called for a pause, to better understand what has already been created rather than keep writing algorithms which may appear to work but have not been adequately tested for side effects. Others have asked for safeguards to help minimize the possibilities of misusing a future super-intelligence. There are also critics who maintain that all this noise about the threat of AI is simply part of the advertising campaign for the new AI-related services. The problem is that many of the scientists who have voiced concerns are directly employed or sponsored by the tech giants. Words like "safety" and "security" are effective soundbites in the new '20s, and of course, all publicity is good publicity. Naturally, I watched many interviews and read several papers from all sides. What I found very interesting, if not alarming, was that the vast majority of those who emphasized the AI threat also proposed the microchip implant as the solution.

OK! But can these AI models be trusted? 
In my opinion, it is not a question of whether the software will become more intelligent than humans and finally destroy us, as some of the more imaginative scientists would like you to think! It is more a question of whether the software will be intelligent enough to avoid being manipulated into divulging information it is not supposed to reveal. But I am sure that sensitive information will not be made available to these models. As for the answers you might get, well, even their developers tell you that you should always double-check their validity.

Is AI a threat to humans? I don't think so, that is, if you ARE a human. Those who choose to wear a silicon chip are beyond saving! They are exposed to all sorts of viruses and they can be driven from a server. Now, if you look at the world through a VR headset, there is still hope. All you have to do is chuck it into the bin!

Is AI more intelligent than humans? It depends on how you define intelligence. The current models run on very powerful computers and have access to huge data-banks. Consequently, they can retrieve information much faster than humans. They are also equipped with artificial neural networks. Although these are a poor simulation of the human brain, the industry has many proponents of a theory which claims that AI models will reach a sentient state in the near future.

Will AI take your job? No, AI will not take your job if you can describe it in human language. But if you say that you are the CMFOO at the "GLOOP Organic Cosmetics" online boutique, then you will soon be jobless, and that would be good news, except that you might then become Vice CMFOO at the "PLOP Bio Nuts" webstore, which is actually very bad news because it means they are still in business...
