AI will change everything in the future
So that I don't forget anything, I made some notes.
The first thought was, of course, "Why should I still do this myself? We have the AI after all."
So, I fed the announcement of this post into ChatGPT.
As additional requirements I asked for civilised language, politically correct phrasing and a length of about 2,500 words.
The result was - hmm - standard stuff: not wrong, but regurgitated common knowledge. A yawn, then.
Generative AI today
Yet OpenAI's software has after all ...
- passed the law exams at the University of Minnesota and
- passed business exams at the renowned Wharton School of Business.
However, ChatGPT failed the Bavarian Abitur in a test conducted by Bayerischer Rundfunk. This obviously requires more than just knowledge and intelligence, higher assistance perhaps.
The class of applications to which ChatGPT belongs is called generative AI. And in generating, AI is truly inventive. Occasionally it is also haunted by hallucinations.
For example, David Smerdon, an economist at the University of Queensland, asked the ChatGPT chatbot: "What is the most cited economics paper of all time?"
The answer came promptly and sounded plausible: "A Theory of Economic History" by Douglass North and Robert Thomas, published in the Journal of Economic History in 1969 and cited more than 30,000 times since then.
The bot added that the article "is considered a classic in the field of economic history". A good answer - precise, short and crisp - only unfortunately wrong, because this article never existed.
"I can explain everything" is usually the knee-jerk reaction when one has been caught in flagrante delicto, that famous Italian seaside resort. The result can indeed be explained: the chatbot gave the statistically most probable answer. That is why it sounded so convincing at first. Bullshit at its best - in the sense of Harry G. Frankfurt.
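How plausibility can beat truth can be sketched in a few lines. In this toy example the candidate answers and their probabilities are invented for illustration (and real models work token by token rather than over whole answers); it merely shows a system that emits the most probable continuation, whether or not the cited paper exists:

```python
# Invented toy distribution over candidate answers - NOT real model output.
# A language model ranks continuations by probability, not by truth.
candidates = {
    '"A Theory of Economic History" by North and Thomas': 0.31,
    '"The Problem of Social Cost" by Ronald Coase': 0.27,
    'I am not sure; please check a citation database.': 0.08,
}

# Emit the most probable continuation: plausibility wins over veracity.
answer = max(candidates, key=candidates.get)
print(answer)  # the fabricated North & Thomas "classic" comes out on top
```

The point of the sketch: nothing in the selection step checks whether the winning answer corresponds to anything in the world.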
Chatbots in their current form, such as Bard or ChatGPT, are therefore unsuitable for searches à la Google or Bing - as Google, Microsoft and now Baidu have had to learn painfully. Yet research in particular would be a wonderful application.
But we can deal with the fallibility of the actors around us - even if not always successfully. It doesn't matter whether they are people or machines. We are used to that. It's in our genes. Fake news is as old as the world.
And so, despite some spectacular failures, this powerful technique is already widely used - examples:
Argonne National Laboratory reported (02/03/2023): "Machine learning speeds catalyst evaluation from months to milliseconds using a model to identify low-cost catalysts that convert biomass into fuels and useful chemicals."
The South China Morning Post reported on an aerial battle between two unmanned combat aircraft - one remotely controlled by an experienced pilot, the other AI-controlled. The AI won the fight after 90 seconds.
In chemistry, the ability to incorporate very large amounts of data into decisions is already proving particularly valuable.
AI is the new laboratory assistant for ...
- Optimisation of chemical reactions
- Prediction of reaction conditions
- Prediction of reaction results
- Retrosynthesis
- Design of molecules
- Prediction of molecular properties
- Drug discovery
- Protein folding
Medicine has similar requirements for handling large amounts of information. Areas that benefit here are ...
- Developing diagnoses
- Making diagnoses
- Testing hypotheses
- Improving prognostics
- Patient monitoring
- ... and much more
Our dysfunctional German school system could also be improved with this - otherwise it will be massively upended against its will.
Bill Gates (2023-03-21), however, already sees paradisiacal conditions coming:
"AI will help people in incredible ways.
But I'm particularly excited about its potential to make the world a fairer place.
AI can help improve access to healthcare in underserved communities, boost education in the United States and around the world, and even prevent climate catastrophe.
The priority of my work in artificial intelligence is to make sure it benefits everyone - not just the wealthy."
Bankers don't want to be left behind: Jamie Dimon, CEO of JPMorgan Chase, recently opined excitedly:
"AI is a stunning technology. We are already using it in risk management, fraud detection, marketing and customer acquisition. And that's just the tip of the iceberg. To me, it's extraordinary."
"Excitement is warranted, but caution is advised," McKinsey then coolly comments on the ChatGPT hype.
Please remain sober ...
When we think of intelligence as the ability to solve problems, to learn, to understand, to plan, to think logically, to reason abstractly and to adapt, we are still inclined to deny Deep Learning systems intelligence.
Let's take them for what they are - highly sophisticated statistical methods.
Stuart Russell, Professor of Computer Science at the University of California at Berkeley, rather calls today's AI "mimicry".
ChatGPT itself answers Solomonically: "The question of whether ChatGPT is intelligent or not should therefore be answered in the context of its functions and capabilities."
The state of knowledge became clear in a discussion, moderated by Samuel Benjamin "Sam" Harris (US philosopher, neuroscientist, writer and debater), between Stuart Russell and Gary Marcus, scientist, best-selling author and entrepreneur.
"We don't know today how chatbots get their results."
"Deep Learning systems see the world like someone looking at a Pointillist painting (a Post-Impressionist style, c. 1889-1910) and seeing all the dots - but not the painting."
"As if someone has absorbed all the knowledge of the world, can reproduce it and yet has understood nothing. - Knows everything, understands nothing."
But also ...
"We can't even say yet whether, through simple further development, today's Narrow Intelligence may not eventually grow into a real Superintelligence."
The singularity
But we are still a long way from that. For we must not forget that the difference in complexity between the human brain and current AI systems amounts to roughly three orders of magnitude: the brain has ~86 billion neurons connected by on the order of 100 trillion synapses, while GPT-3 (OpenAI, 2020) comes to 175 billion parameters.
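The "three orders of magnitude" can be checked with a quick back-of-the-envelope calculation, treating synapses and parameters as loosely analogous connection counts - a strong simplification, not a real equivalence; the figures are the commonly cited estimates:

```python
import math

synapses = 1e14     # commonly cited estimate: ~100 trillion synapses
parameters = 175e9  # GPT-3 parameter count

# Compare the two "connection counts" on a log scale.
gap = math.log10(synapses / parameters)
print(f"~{gap:.1f} orders of magnitude")  # prints: ~2.8 orders of magnitude
```

Even on this crude measure, then, the brain still leads by a factor of several hundred.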
However, the human brain and AI systems function in very different ways and have different strengths and weaknesses.
Nor can we yet claim to have understood how the human brain works.
Noam Chomsky notes that the current spate of AI chatbots like OpenAI's ChatGPT and Microsoft's Bing AI is being hailed as "the first glimmers on the horizon of artificial general intelligence" - the point, that is, at which AI would be superior to humans in thought and action - but thinks we are absolutely nowhere near that stage yet.
"That day may be coming, but its dawn has not yet broken, contrary to what you can read in exaggerated headlines and what can be expected from imprudent investments," opined the Massachusetts Institute of Technology cognitive scientist.
Man as creator
According to biblical tradition, the Creator made man in His image.
Since, according to this very tradition - in the style of the classical establishment - we are dealing with an old, white man, it worked well enough for the creation of Adam.
With the creation of Eve, a workaround was already necessary.
The first bio-hack.
But if we consider our very obvious imperfection, which manifests itself almost daily, this in turn does not cast such a good light on our Master Template.
If that has already gone wrong, how would it look if we earthlings were to create a creature in our own image?
Can that work at all?
Then perhaps it would be better not to make one in our image.
But what values should this AI, this strong AI, then alternatively hold sacred?
We remember that a strong AI should be at least equal to humans in its cognitive abilities.
That follows from the definition.
But if it is, it will probably be faster. We also assume that it does not tire, does not need sleep.
(Whether these assumptions are admissible is highly uncertain. If there is still time, I would like to argue about it (with myself)).
Assuming the admissibility of this assumption, no one would be better suited to develop the AI further than the AI itself.
Would that be a wise decision?
We must be aware that in this model we would be giving control out of our hands.
A new strand of evolution would be opened, the outcome by definition uncertain.
Humans would then be out of the loop - stopped cold.
AI and society
All true, but still: AI is something like electricity - a basic technology, not an application. But the applications that can already be expected at this, today's, level could free us humans from menial tasks and usher in a new era of creativity. A new Kondratiev cycle would be triggered. (We are currently in the 5th cycle.)
This would enable us to meet the demand of Gary Hamel, a thought leader and expert on management:
"We have tried in the past to turn people into machine-like beings.
They should adapt to industrialisation and automated, monotonous processes.
This approach is counterproductive and limits human creativity, innovation and resilience."
Instead of forcing people into mechanistic roles à la "Metropolis" (Fritz Lang's monumental 1927 silent expressionist film), Hamel advocates a more human-centred approach to work and organisations - and has been doing so since the 1990s.
But such a historical paradigm shift could also eliminate jobs altogether and raise huge social problems, experts warn.
They don't have to be insurmountable, because earlier technological advances - from electricity to the internet - have also triggered major social change, says Siqi Chen, CEO of the San Francisco-based start-up Runway.
The change that will result will be "orders of magnitude greater than any other technological change we've ever seen in history".
If we think for a moment about the upheavals of industrialisation, such as the long-lasting schism between capitalism and communism, we may be in for something we are in no way prepared for.
Regulation
To err is human after all - well, actually.
Its fallibility lends the bot - which, after all, had to undergo long and strenuous deep-learning training and still errs - almost sympathetic, human traits.
It is somehow one of us.
That is apparently what Google engineer Blake Lemoine thought when he claimed earlier this month that one of the company's experimental AIs, called LaMDA, had developed feelings - prompting the software giant to put him on leave.
Lemoine is not alone. Ilya Sutskever, chief scientist of OpenAI, also recently tweeted that "it may be that today's large neural networks are slightly conscious".
So that would be a being with the general intelligence of a grasshopper, albeit one that has acquired the knowledge of entire libraries.
General intelligence? No, we are still a long way from that. It is not even clear whether we will ever reach the level of AGI (artificial general intelligence). Then the "machine would be more intelligent than man", would take its own further development into its own hands and send us to the beer garden. The singularity would be there.
Even if it is still a long and possibly winding road, the fear of this loss of control, however it may turn out, has already arrived in the here and now. It already lives among us.
In the AI scene, the approach of goal-setting or value engineering is therefore being discussed.
Here, the system is given an unchangeable set of values and rules at birth, against which every action of the AI is tested.
The three, and in the expanded version four, robot laws of Isaac Asimov (1942) are certainly among them.
0. A robot may not harm humanity or, through inaction, allow humanity to come to harm.
1. A robot may not injure a (single) human being or, through inaction, allow a human being to come to harm, unless this would violate the zeroth law.
2. A robot must obey the orders given to it by humans - unless such orders would conflict with the zeroth or first law.
3. A robot must protect its own existence, as long as this does not conflict with the zeroth, first or second law.
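What "testing every action against a fixed rule set" could mean in the simplest case can be sketched as follows. Everything here is invented for illustration - the `harms_*` and `obeys_order` flags, the action format, the rule encoding - and the laws' "unless" precedence clauses are collapsed into a plain conjunction. The hard part, deciding whether a real action actually harms a human, is exactly what this sketch assumes away:

```python
# Minimal sketch of "value engineering": a fixed, ordered rule set
# against which every proposed action is checked before execution.
# Actions are represented as dicts of pre-judged boolean flags.
ASIMOV_LAWS = [
    ("law 0", lambda a: not a.get("harms_humanity", False)),
    ("law 1", lambda a: not a.get("harms_human", False)),
    ("law 2", lambda a: a.get("obeys_order", True)),
    ("law 3", lambda a: a.get("preserves_self", True)),
]

def permitted(action: dict) -> bool:
    """An action is allowed only if it passes every law."""
    return all(rule(action) for _, rule in ASIMOV_LAWS)

print(permitted({"harms_human": True}))  # a forbidden action -> False
print(permitted({"obeys_order": True}))  # an unobjectionable action -> True
```

A real system would first need to derive such flags from a model of the world - and that grounding step is precisely the unsolved, practical part.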
Ok, that sounds good.
We could do it that way - if we knew how to do it practically.
Also, all the developers of the AI would really have to adhere to it.
But this developer would be the AI itself, a specialised one perhaps.
Would it really follow the given rules permanently?
Power competition is the driver
War is known to be the "father of all things".
It has shaped us, made us what we are.
It was the constant struggle for survival as an individual and as a community, against all other players in our natural environment, the environment itself and against our own species. It has become deeply embedded in our instincts and mostly only half-conscious values.
Regrettably, this war is by no means over.
On the contrary, at a time when things are getting tight for us on our planet, when we should actually be acting more as a community, as a global community mind you, our inherent instincts seem to be pointing us in a different, familiar direction: The struggle for dwindling resources, for living space, for places to survive in an increasingly devastated world.
All signs point to a storm and indicate that the conflicts are intensifying - and will, indeed must, be fought out to the very last consequence.
There is no need to explain further that the most effective weapon, namely AI, will be used for this purpose.
And the party that can use the most effective AI will probably come out on top, will be able to defeat the opponent.
When it comes to victory or defeat, of course all means are justified, let me emphasise "all".
Does anyone seriously believe that any of the opponents will then still hold built-in ethical rules "sacred"?
I think we can safely bury this illusion.
But what will we be faced with then? How will AI behave and develop?
What relationship will it have with humanity?
If it is similar to us, only much stronger, our downfall would be sealed.
But if it surpasses us in intelligence, perhaps it will develop something like insight or even wisdom and protect us from ourselves.
Finally, here is the opinion of the (controversial) Sam Harris:
"A superintelligence vastly superior to the human one is inevitable.
For if intelligence is only a matter of information processing, and we continue to improve our machines, we will produce a form of superintelligence.
Then we will have to admit that we are in the process of creating a kind of God.
Now would be a good time to make sure it's a god we can live with."
AI for better politics
It doesn't have to be God. We should remain humble. But how about supporting the business of government, from the local authority to the supranational organisation, with a system that is committed exclusively to rational decisions?
There are plenty of reasons not to be satisfied with current politics. The weakest link in the chain is the human being himself.
- Politicians are "heroes of the moment" (Gabor Steingart).
- They do not plan for the long term.
- Their job is far too uncertain for that.
- The professional politician has politics as his profession - that is his whole concern.
- The politics he stands for are not unimportant, but secondary, occasionally interchangeable.
- Party programmes are therefore often woolly collections of popular but incoherent statements.
- No programmer would call them a "programme".
They do not address the real challenges, or address them half-heartedly:
- Climate change,
- resource depletion,
- overpopulation,
- species extinction,
- inequality,
- wars and finally the
- limits to growth.
The arduous and gruelling task of a politician's career tends to promote a negative selection of character.
On election day, we cast our vote - and it is gone.
The feeling of buyer's remorse quickly sets in.
Or in the words of Hannah Arendt:
"No one has ever counted truthfulness among the political virtues.
Lying seems to be part of the craft not only of the demagogue but also of the politician and even the statesman.
A remarkable and disturbing fact."
The long-term survival of humanity in an intact environment is thus no longer achievable.
Yet, according to Noam Chomsky, "the challenge before us exceeds anything humanity has ever experienced. The fate of life on our planet is now at stake".
How can we change this? → Less human, more programme.
As a central element, we want to create a political oracle to which we will give the rules by which the political entity, a state for example, should be governed. We will then be able to ask this oracle our questions about current political events.
A Pythia, albeit an unambiguous one, which will provide the citizen with soberly logical information, evaluate political decisions, comment on political events with incorruptible acuity.
For, as Paul Kirchhof says: "There are hardly any tangible political principles of reliability left by which the powerful in this state act."
So, with everything at stake, we should act strictly according to principles.
"So, why not?", a small group of intrepid people, including myself, thought.
The vehicle will be a synthetic party.
It deliberately refrains from orienting itself on existing models and wants to break new ground.
It has set itself three tasks ...
- A permanent task,
- A secondary condition and
- A short-term goal
The permanent task to which we have committed ourselves is to enable humanity to survive in an intact environment in the long term.
The secondary condition is to preserve our typical European liberal civil liberties, which, according to John Stuart Mill, are particularly threatened in critical times.
Our short-term goal is to create a multinational state "Europe" with sufficient weight to be able to fulfil our permanent task with its secondary condition.
AI in politics so far
There have already been a few attempts to use artificial intelligence in politics - mostly, however, for subordinate tasks ...
- Campaign planning and management,
- Trend and pattern recognition
- Manipulative voter influence
- Traffic optimisation,
- Political analysis
- Crime prediction
- Resource optimisation
- ...
In 2018, however, an AI-controlled robot named Michihito Matsuda ran for mayor of Tama City, a suburb of Tokyo. The AI-controlled candidate, it was claimed, would be able to make more rational, data-driven decisions without being influenced by emotions or personal prejudices:
Efficient management of city resources: the AI candidate promised to analyse and use city data to optimise resource allocation and improve public services.
Fairness and transparency: Since AI has no personal interests or biases, the campaign said, AI could make more objective and fair decisions, reducing possible corruption or nepotism in politics.
Responding quickly to citizens' needs: the AI candidate wanted to use data analytics to identify and meet the needs of the local community quickly and effectively.
The AI candidate came third out of three candidates, with around 4,000 votes. Still, a start had been made. A journey had begun.
The somewhat lengthy introduction to the topic, with its emphasis on the opportunities and limitations, has hopefully made clear that the technology for a political oracle is by no means as cheap and ubiquitously available as ChatGPT.
Oh yes, what does our generative AI have to say about our project? Here's the answer:
"...One of the biggest challenges in developing an "AI oracle" for policy-making is to ensure that the system is transparent, accountable and able to make informed and reliable recommendations.
This is likely to require significant advances in the explainability and interpretability of AI, as well as the development of new standards and regulations for the use of AI in policymaking. ..."
By extension, ChatGPT 3.5 concludes that it is likely to take some time and significant progress in this discipline will be required before the oracle makes our jobs easier, provides predictability and clarity.
Not easy, then.
But if we don't even try, we have already lost.
Or with Friedrich Schiller: "He who thinks too much will achieve little".
Art is sometimes ahead of reality: to conclude, a still somewhat awkward vision: "Meet the AI Mayor - interactive theatre to experience autonomous decision making."