2023.05.18
As the Greek philosopher Heraclitus observed, the only constant in life is change. Today, the world is undergoing a metamorphosis unlike anything seen before, and this is all thanks to the technology known as “artificial intelligence”, or simply AI.
Research by McKinsey shows that AI adoption across organizations worldwide more than doubled between 2017 and 2022. So what makes this technology so fascinating, and so different from anything we’ve seen before?
What is AI?
As the name implies, AI involves the transposition of human intelligence onto an “artificial” entity, such that it can function the way humans do.
John McCarthy, one of the founders of the discipline, defined AI as “the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.”
However, the conversation around artificial intelligence was born decades before this definition, in British mathematician Alan Turing’s seminal paper “Computing Machinery and Intelligence”, published in 1950. In this paper, Turing, labeled by many as the “father of computer science”, poses the following question: “Can machines think?”
From there, he offered a test now famously known as the “Turing Test”, in which a human interrogator tries to distinguish between a computer’s and a human’s text responses. The opening section of the paper was dedicated to what Turing calls “The Imitation Game”, which later became the title of a biographical film (2014) on his life.
While the “Turing Test” has undergone much scrutiny since its publication, it remains an important part of the history of AI, and an ongoing topic in philosophy, since it draws on ideas from linguistics.
Two major contributors to modern studies of artificial intelligence are Stuart Russell and Peter Norvig, who jointly published what eventually became the most popular AI textbook in the world. In it, they delve into four potential goals/definitions of AI, which differentiate computer systems on the basis of rationality and of thinking vs. acting: 1) Systems that think like humans; 2) Systems that act like humans; 3) Systems that think rationally; and 4) Systems that act rationally. Alan Turing’s definition would have fallen under the second category.
Below is a definition of AI provided by IBM, whose famous AI-based question-answering system, Watson, won the $1 million prize on the American game show Jeopardy!
At its simplest form, artificial intelligence is a field, which combines computer science and robust datasets, to enable problem-solving. It also encompasses sub-fields of machine learning and deep learning, which are frequently mentioned in conjunction with artificial intelligence. These disciplines are comprised of AI algorithms which seek to create expert systems which make predictions or classifications based on input data.
To simplify even further, AI is essentially the simulation or approximation of human intelligence in machines, as described by Investopedia.
More Than Robots
When most people first hear the term AI, the first thing they usually think of is robots. That’s mostly a byproduct of big-budget films and pop culture, but there’s much more to AI than just robots.
Pretty much anything that can mimic human cognitive activity can be considered AI nowadays. It can be something small and intangible, like an AI-driven app on your smartphone (think Siri on the iPhone), or an upgrade to a tool or machine we already use (e.g., self-driving cars).
It would be impossible to list every single example, because the possible applications of AI are virtually endless. The technology is already in use across many sectors and industries, ranging from healthcare and science to retail and banking, where AI systems can perform highly complex tasks with precision.
Even for the simplest of tasks, we are beginning to rely on AI.
When was the last time you used a physical map to guide your trip? Google Maps, using AI technology, has every corner of the Earth covered for us. Have a question about your Internet bill? A chatbot can get that resolved within minutes. Then there’s social media, where everything we see is pushed to us by AI algorithms built to feed our appetites.
As I’m writing this article, AI is “at my side” to correct any grammatical errors and typos. Before long, every aspect of our everyday lives will be connected to, and highly dependent on, AI.
Era of Generative AI
But even after years of tasting the benefits of AI in our daily tasks, we still tended to treat AI as a technology of the future, one with room for improvement. After all, the expectations placed on machines are quite different from those we place on ourselves.
While there’s nothing wrong with that line of thinking, it doesn’t mean the AI takeover isn’t happening in the present.
We’ve only just started to realize that after the emergence of ChatGPT — the world-famous chatbot designed to mimic a human conversationalist and produce answers to abstract and complex questions from users.
Short for “Chat Generative Pre-trained Transformer”, ChatGPT can help us write business pitches, compose music, teleplays, fairy tales and student essays, answer test questions, write poetry and lyrics, and much more. Critically, it has demonstrated an astonishing ability to perform many of these tasks as skillfully as professionals in the relevant fields.
What’s more, it takes only about a month to train the ChatGPT model on material that might take humans years to learn, and for every request, the chatbot can provide a response within seconds. A majority of the time (53%, according to the latest stats from Tooltester), we can’t even tell whether the content was generated by AI or by a real person.
Now everyone is talking about ChatGPT, and more and more people are using it. In fact, it is the fastest-growing application ever, reaching 100 million monthly active users just two months after its launch in late 2022. By comparison, it took Netflix 3.5 years to reach 1 million users.
The meteoric rise of ChatGPT is considered a watershed moment for AI; it signals that we’re entering a new era of AI technology development, one focused on general-purpose rather than task-specific models.
This is what IBM said about the new wave of AI led by ChatGPT: “The last time generative AI loomed this large, the breakthroughs were in computer vision, but now the leap forward is in natural language processing.
“And it’s not just language: Generative models can also learn the grammar of software code, molecules, natural images, and a variety of other data types.”
Generative AI, as described by IBM, refers to deep-learning models that can take raw data — say, all of Wikipedia or the collected works of Rembrandt — and “learn” to generate statistically probable outputs when prompted. At a high level, generative models encode a simplified representation of their training data and draw from it to create a new work that’s similar, but not identical, to the original data.
While generative models have been used for years in statistics to analyze numerical data, the rise of deep learning has now made it possible to extend them to images, speech, and other complex data types.
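To make that idea concrete, here is a minimal sketch of the simplest kind of generative text model: a bigram (Markov chain) model that counts which word tends to follow which in its training text, then samples statistically probable continuations. This toy illustration is my own, not IBM’s, and real systems like ChatGPT rely instead on deep neural networks with billions of parameters; but the core principle is the same: learn the statistics of the training data, then sample from them.

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """Count how often each word follows each other word in the text."""
    counts = defaultdict(lambda: defaultdict(int))
    words = text.lower().split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def generate(model, start_word, max_words=12):
    """Sample a 'statistically probable' continuation, one word at a time."""
    word = start_word
    output = [word]
    for _ in range(max_words):
        followers = model.get(word)
        if not followers:          # dead end: no observed continuation
            break
        candidates, weights = zip(*followers.items())
        word = random.choices(candidates, weights=weights)[0]  # weighted draw
        output.append(word)
    return " ".join(output)

# A tiny made-up corpus; a real model would train on vastly more data.
corpus = ("the quick brown fox jumps over the lazy dog and "
          "the quick grey cat naps beside the lazy dog")
model = train_bigram_model(corpus)
print(generate(model, "the"))
```

Swap the toy corpus for all of Wikipedia and the word counts for a deep neural network, and you have, in spirit, the recipe behind today’s generative models.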
In the new era, generative AI tools like ChatGPT can adopt virtually any role we’d like them to play, whether it’s executive assistant, customer service rep, data coding guru, or even food recipe generator.
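As a rough sketch of how that role-adoption typically works in practice: chat models accept a “system” message that sets the persona before the user’s actual request. The snippet below assumes the openai Python package as it existed in early 2023; the model name, placeholder key, and example prompts are illustrative assumptions, not a definitive recipe.

```python
# Illustrative sketch only: giving ChatGPT a role via a system message.
# Assumes the pre-1.0 openai Python package (circa 2023); the interface
# and model name may have changed since.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        # The system message assigns the role the model should adopt.
        {"role": "system", "content": "You are a meticulous executive assistant."},
        # The user message carries the actual task.
        {"role": "user", "content": "Draft a short agenda for Monday's team meeting."},
    ],
)

print(response["choices"][0]["message"]["content"])
```

Change one line, the system message, and the same model becomes a customer service rep or a recipe generator; that flexibility is what separates general-purpose models from the task-specific systems of the past.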
In the not-too-distant future, you can bet there’ll be more ChatGPTs waiting in line to disrupt a wide range of industries.
From ‘Weak’ to ‘Strong’ AI
The hyped-up generative AI models led by ChatGPT fall under the category of ‘Weak AI’ — also known as Narrow AI or Artificial Narrow Intelligence (ANI). Weak AI refers to systems designed to carry out one particular job, and it drives most of the AI tech we see or use today.
‘Narrow’ might be a more accurate descriptor for this type of AI as it is anything but weak; it enables some very robust applications, such as Apple’s Siri, Amazon’s Alexa, IBM Watson, and autonomous vehicles.
The other category, as you might expect, is ‘Strong AI’ — a much more complicated class of systems that can carry out tasks considered to be human-like. It comprises two forms of AI: Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI).
AGI, or general AI, is a theoretical form of AI in which a machine would have intelligence equal to humans; it would have a self-aware consciousness with the ability to solve problems, learn, and plan for the future. Artificial Super Intelligence (ASI) — also known as superintelligence — would surpass the intelligence and ability of the human brain (more on this later).
While Strong AI is still entirely theoretical, with no practical examples in use today, that doesn’t mean AI researchers aren’t exploring its development. In the meantime, the best examples of ASI might be from science fiction, such as HAL 9000, the superhuman, rogue computer assistant in 2001: A Space Odyssey.
With that, future advances in AI could eventually blur the lines between our world and what we once saw only in fantasy movies. Those who associate AI with robots will be somewhat validated, except the ‘robots’ could be even more powerful than anything they ever imagined.
‘Double-Edged Sword’
As much as society will rely on the powers of AI moving forward, we also cannot ignore the fact that this burgeoning technology comes with great risks.
Since its inception, AI has come under intense scrutiny from scientists and the public alike. One common theme is the idea that machines will become so highly developed that humans will not be able to keep up; the machines will take off on their own, redesigning themselves at an exponential rate and eventually conquering the world (remember Arnold Schwarzenegger and The Terminator?).
In a recent interview with Fortune magazine, a former safety researcher at OpenAI, the San Francisco-based startup behind ChatGPT, said there is at least a 10-20% chance that the tech will take over the world, with many or most ‘humans dead’. And he’s far from the only person saying that: a recent survey showed that half of AI researchers believe there’s at least a 10% chance of human extinction.
Another concern is that machines could invade people’s privacy and even be weaponized. Other arguments debate the ethics of using AI and whether intelligent systems such as robots should be granted the same rights as humans.
Self-driving cars have also been fairly controversial, as they tend to be designed for the lowest possible risk and the fewest casualties. Presented with a scenario of colliding with one person or another, these cars would calculate the option that causes the least amount of damage.
A contentious issue for years has been how AI will affect human employment. With many industries looking to automate certain jobs through intelligent machinery, there is a concern that people will be pushed out of the workforce. For example, self-driving cars may remove the need for taxis and car-share programs, while manufacturers may replace human labor with machines, making people’s skills obsolete.
Scarier still, we haven’t even grasped the full extent of AI’s unwanted consequences, because we have yet to experience them.
On paper, the dangers AI poses to society could be just as limitless as its capabilities: the loss of human life if an AI medical algorithm goes wrong, or the compromise of national security if an adversary feeds disinformation to a military AI system. These are possibilities we simply can’t rule out.
The bigger existential threat lies in the development of Strong AI, which doomsayers are calling the next “asteroid”, akin to the one that exterminated the dinosaurs some 66 million years ago. Many companies are already working to build AGI, and it may arrive sooner than people expect.
“Until quite recently, I thought it was going to be like 20-50 years before we have general purpose AI. And now I think it may be 20 years or less, with even 5 years being a possibility,” Geoff Hinton, dubbed the “godfather of AI”, previously told CBS.
And the leap from AGI to the next stage, superintelligence, may not take long; according to a reputable prediction market, it will probably take less than a year. Superintelligence isn’t a “long-term” issue: it’s even more short-term than, for example, climate change and most people’s retirement planning, Time magazine wrote.
Still, no one can say for certain when that metaphorical asteroid is coming, or whether we can avoid catastrophe by navigating the benefits and risks of AI; much will depend on how it’s used, what it’s used for, and who is using it.
For now, at least many of us are aware that AI is a ‘double-edged sword’ that could either make us or break us.
Timeout Needed
In late March, more than 1,000 technology leaders, researchers and other pundits working in and around AI signed an open letter warning that the technology presents “profound risks to society and humanity.”
The group, which included Tesla chief executive Elon Musk and Yoshua Bengio, one of the three researchers dubbed the “godfathers” of AI, urged labs to halt development of their most powerful systems for six months so that the dangers behind the technology could be better understood.
Geoffrey Hinton, another of the AI “godfathers”, who spent a decade at Google, recently left his post out of regret for the technology he helped develop, and he’s now actively voicing his concerns about its dangers to humanity.
“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter said.
While the letter was brief, it represented a growing concern among AI experts that the latest systems, most notably GPT-4, the technology introduced by the Microsoft-backed OpenAI, could cause harm to society, and that future systems could be even more dangerous.
“Our ability to understand what could go wrong with very powerful AI systems is very weak,” said Bengio, a professor at the University of Montreal. “So we need to be very careful.”
And as these systems become more powerful, we need to assess the risks they would introduce, which take on various forms depending on how far ahead we’re looking.
In the short term, AI experts are concerned that people will rely on these systems for medical advice, emotional support and the raw information they use to make decisions. They are also worried that people will misuse these systems to spread disinformation. Because they can converse in humanlike ways, they can be surprisingly persuasive, experts said.
In the same month, the World Health Organization called for caution in using AI for public healthcare, stating that the data used to train AI may be biased, generating misleading or inaccurate information, and that the models can be misused to produce disinformation.
In the medium term, experts are worried that the new AI systems could be job killers. Right now, technologies like GPT-4 tend to complement human workers, but OpenAI has acknowledged that they could replace some workers. A paper written by OpenAI researchers estimated that 80% of the US workforce could have at least 10% of their work tasks affected by large language models.
Longer term, experts are worried about losing control over AI systems. Going back to the “asteroid” theme, some people who signed the letter believe AI could slip outside our control or destroy humanity, though most say that’s wildly overblown.
Conclusion
One thing we cannot do, though, is turn a blind eye, as we’ve done with climate change for years; a good portion of us remain in denial of any “doom and gloom” scenario. We can’t have a real-life re-enactment of “Don’t Look Up” — the 2021 Netflix black comedy that mocks our ignorance in the face of an existential threat.
The responsibility now lies with tech companies to make AI products safe for the public, and with policymakers to put safety regulation in place. That was the stance taken by US President Joe Biden before a meeting this week with science and technology advisers: he said it remains to be seen whether AI is dangerous, but underscored that tech companies have a responsibility to ensure their products are safe before making them public.
The White House has already convened top technology CEOs, including Sam Altman of OpenAI, to address AI. US lawmakers likewise are seeking action to further the technology’s benefits and national security while limiting its misuse.
On Tuesday, Altman spoke before Congress for the first time, openly admitting that his company’s generative AI requires government regulation, and citing its potential to interfere with elections as a “significant area of concern”.
At the center of the discussion was a viral AI-generated picture of former President Donald Trump being arrested by the NYPD, which lawmakers argue is a case of misinformation that could harm the integrity of the 2024 election.
Altman suggested that, in general, the US should consider licensing and testing requirements for development of AI models. When asked to opine on which AI should be subject to licensing, he said a model that can persuade or manipulate a person’s beliefs would be an example of a “great threshold.”
Christina Montgomery, IBM’s chief privacy and trust officer, has previously urged Congress to focus regulation on areas with the potential to do the greatest societal harm.
But for now, no one can say with certainty what exactly those harms are, or when they will befall us.
Would regulations actually help mitigate those risks? Historical evidence suggests not, because bending the rules (euphemistically, “getting creative”) is the essence of this industry. One AI researcher told Time magazine that “the power of AI is growing faster than regulations, strategies and know-how for aligning it. We need more time.”
What’s certain is that the world changes as fast as AI technology allows. Too much change can lead, in the best case, to transformation, and in the worst case, to destruction.
Richard (Rick) Mills
aheadoftheherd.com
Subscribe to my free newsletter
Legal Notice / Disclaimer
Ahead of the Herd newsletter, aheadoftheherd.com, hereafter known as AOTH.
Please read the entire Disclaimer carefully before you use this website or read the newsletter. If you do not agree to all of the AOTH/Richard Mills Disclaimer, do not access/read this website/newsletter/article, or any of its pages. By reading/using this AOTH/Richard Mills website/newsletter/article, and whether or not you actually read this Disclaimer, you are deemed to have accepted it.
Any AOTH/Richard Mills document is not, and should not be, construed as an offer to sell or the solicitation of an offer to purchase or subscribe for any investment.
AOTH/Richard Mills has based this document on information obtained from sources he believes to be reliable, but which has not been independently verified.
AOTH/Richard Mills makes no guarantee, representation or warranty and accepts no responsibility or liability as to its accuracy or completeness.
Expressions of opinion are those of AOTH/Richard Mills only and are subject to change without notice.
AOTH/Richard Mills assumes no warranty, liability or guarantee for the current relevance, correctness or completeness of any information provided within this Report and will not be held liable for the consequence of reliance upon any opinion or statement contained herein or any omission.
Furthermore, AOTH/Richard Mills assumes no liability for any direct or indirect loss or damage for lost profit, which you may incur as a result of the use and existence of the information provided within this AOTH/Richard Mills Report.
You agree that by reading AOTH/Richard Mills articles, you are acting at your OWN RISK. In no event shall AOTH/Richard Mills be liable for any direct or indirect trading losses caused by any information contained in AOTH/Richard Mills articles. Information in AOTH/Richard Mills articles is not an offer to sell or a solicitation of an offer to buy any security. AOTH/Richard Mills is not suggesting the transacting of any financial instruments.
Our publications are not a recommendation to buy or sell a security – no information posted on this site is to be considered investment advice or a recommendation to do anything involving finance or money aside from performing your own due diligence and consulting with your personal registered broker/financial advisor.
AOTH/Richard Mills recommends that before investing in any securities, you consult with a professional financial planner or advisor, and that you should conduct a complete and independent investigation before investing in any security after prudent consideration of all pertinent risks. Ahead of the Herd is not a registered broker, dealer, analyst, or advisor. We hold no investment licenses and may not sell, offer to sell, or offer to buy any security.