What is Artificial Intelligence or AI? 

Artificial Intelligence and machine learning are buzzwords in today’s tech landscape, captivating our imagination and shaping the future of modern life. Artificial intelligence refers to computer programs that simulate or mimic human intelligence; the field includes machine learning and robotic automation.

Consumers are exposed to simple AI applications, from Google searches all the way up to purchasing and streaming recommendations on YouTube, Netflix and Amazon. Amazon uses Echo and Alexa to (allegedly) make your shopping experience more user-friendly, while Netflix suggests what movie or TV show to watch next. And then there is creepy Siri, listening on your smartphone and (many users suspect) driving the ads in your Facebook feed.

Smart tech is revolutionizing the way we live, work, and interact with the world around us. From self-driving cars to personalized shopping recommendations on streaming platforms, artificial intelligence and machine learning are transforming our lives. But what exactly are AI and ML? How do they work, and what potential do they hold?

In this article, we consider how AI developed and unpack some of its core concepts, costs and benefits. We explore practical applications of AI and machine learning – their transformative power along with privacy and surveillance concerns.

Artificial intelligence and machine learning – inception

The term “artificial intelligence” (AI) was coined in the 1950s, when AI first appeared as an academic discipline in universities. In the decades that followed, however, interest and funding in AI research waned, and a kind of AI dark ages – the “AI winter” – ensued.

Massive advances in computational power and enthusiasm about its potential drove an explosion of AI funding in the first quarter of the 21st century. This resurgence means that AI is now present in most aspects of modern life (in both visible and less visible forms).

The Turing Test

The English mathematician and cryptanalyst Alan Turing devised the Turing Test in 1950. Turing, regarded as the father of modern computer science and a pioneer of AI and machine learning, originally described the test as the “Imitation Game”.

The Turing Test is a measure of a machine’s ability to demonstrate intelligent behaviour indistinguishable from that of a human. The test’s inventor, Turing, wanted to know whether a computer program could trick humans into thinking it was a person.

AI-generated image of Alan Turing

In this test, a human evaluator holds a conversation with two unseen participants – one human and one machine – without knowing which is which. The test probes whether the machine can convincingly imitate a human: the computer passes when the evaluator mistakes it for the human participant.

The essence of the Turing Test lies in the machine’s ability to demonstrate human-like intelligence, understanding, and natural language processing skills in its interactions. It requires the machine to effectively simulate human behavior and convince the evaluator that it is a human counterpart. While passing the Turing Test isn’t an absolute indicator of true human-like intelligence or consciousness, it serves as a benchmark for evaluating the machine’s ability to exhibit intelligent behavior akin to that of a human.
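To make the protocol concrete, here is a minimal Python sketch of the imitation game. The canned respondents, questions and guessing step are invented purely for illustration – they are not part of Turing’s original paper – but the pass condition matches the description above: the machine “wins” when the interrogator picks the wrong respondent.

```python
import random

# Canned answers for two hidden respondents (invented for illustration only).
HUMAN = {"What is 2 + 2?": "4, obviously.", "Do you dream?": "Sometimes, about the sea."}
MACHINE = {"What is 2 + 2?": "4.", "Do you dream?": "I can generate text about dreaming."}

def imitation_game(questions):
    """Show both respondents' answers under anonymous labels, then let the
    interrogator guess which label hides the machine."""
    labels = {"A": HUMAN, "B": MACHINE} if random.random() < 0.5 else {"A": MACHINE, "B": HUMAN}
    for q in questions:
        print(f"Q: {q}\n  A: {labels['A'][q]}\n  B: {labels['B'][q]}")
    guess = input("Which respondent is the machine, A or B? ").strip().upper()
    if labels.get(guess) is MACHINE:
        print("Correct: the machine was identified.")
    else:
        print("Wrong guess: the machine passed this round of the test.")

imitation_game(list(HUMAN))
```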

The CAPTCHA tests found on websites today are a reversed form of the Turing Test: the CAPTCHA checks whether a human or a bot is trying to access the site. But it was the Turing Test that first raised important questions about the ability of computers to mimic human thought and behavior.

Brief History

Rapid improvements in AI have been driven by massive gains in computing power and by cheaper, more capable neural networks. AI is now everywhere: it has been lurking in facial and voice recognition on our smartphones since around 2012, and the technology has been advancing in leaps and bounds, largely unnoticed by most of us.

Some of the quirkier historical highlights of AI include the examples shown in the image below:

Where could AI go in the future?

In the past two years, the emergence and rapid development of AI platforms such as ChatGPT and image-generation AIs have provided a glimpse of the impact of AI on human creative processes. Creative professionals such as writers, musicians and artists can now have their work (and their likeness) imitated by AI platforms that easily pass the Turing Test.

Deepfakes and AI software that can imitate your image undetected raise concerns not only about a person’s intellectual property but also about their very identity. During the recent Screen Actors Guild strike in Hollywood, both writers and actors expressed their very real fear that AI could take their jobs and undermine the creative process.

Some reasons why AI is here to stay

Artificial intelligence is not going away anytime soon, even if we all decide to ditch our smartphones, Fitbits and gaming consoles tomorrow. There are many benefits of AI as well as potentially dystopian areas of concern, such as loss of privacy and increased surveillance.

AI can improve human life by assisting with driving, helping doctors diagnose illnesses and supporting specialists in making informed decisions. Artificial intelligence and machine learning can process vast amounts of data in seconds. AI can also help small businesses maintain business continuity, manage workflows, solve complex problems and perform repetitive tasks.

AI and ML – making small businesses agile 

Small businesses can remain competitive by incorporating AI into their work processes. Over the last decade alone, many small and medium-sized businesses realized that they had to transition to digital work processes, using AI and remote work to keep their operations going and to remain competitive with larger players.

Most businesses these days use cloud migration, artificial intelligence, machine learning and analytics to better manage their workflows. AI can potentially help companies make smarter business decisions by coordinating data delivery, analyzing current business trends and providing forecasts that reduce uncertainty for CEOs.
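As a toy illustration of the forecasting idea (the sales figures below are invented, and no real product or dataset is implied), the sketch fits a simple linear trend to a year of monthly sales and projects the next quarter.

```python
import numpy as np

# Hypothetical monthly sales figures (units sold) for the past year.
sales = np.array([120, 135, 128, 150, 162, 158, 170, 181, 175, 190, 205, 212])
months = np.arange(len(sales))

# Fit a straight-line trend: sales ~ slope * month + intercept.
slope, intercept = np.polyfit(months, sales, deg=1)

# Project the next three months from the fitted trend.
future_months = np.arange(len(sales), len(sales) + 3)
forecast = slope * future_months + intercept
print("Forecast for the next quarter:", np.round(forecast).astype(int))
```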

In business, firms with intelligent workflows (using AI and robotic process automation, or RPA) are better able to scale quickly and to enhance their network security. According to a recent Deloitte market survey, demand for robotic process automation is growing at an extraordinary 20% per year and is projected to reach $5 billion by 2024.

The same survey found that the use of robotic process automation can increase a firm’s workforce capacity by approximately 27% without the need to hire additional staff.

Robotic process automation is software that performs tasks based on a series of rules; examples include voice recognition, digital dictation software and the chatbots used by ecommerce platforms. These rules-based processes interact with an application’s user interface, which means computers can now perform work tasks previously carried out by humans, either on or off the cloud.
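Here is a rough, purely illustrative Python sketch of the “series of rules” idea: incoming ecommerce messages are matched against keyword rules and routed to canned actions. The rules and messages are made up for the example, and real RPA tools drive an application’s user interface rather than plain strings.

```python
# Minimal rules-based bot: each rule is a (keyword, action) pair, checked in order.
RULES = [
    ("refund", "Open a refund ticket and email the returns form."),
    ("track", "Look up the order number and reply with the courier link."),
    ("invoice", "Generate a PDF invoice and attach it to the reply."),
]

def handle_message(message: str) -> str:
    """Return the action for the first rule whose keyword appears in the message."""
    text = message.lower()
    for keyword, action in RULES:
        if keyword in text:
            return action
    return "Escalate to a human agent."  # fallback when no rule matches

print(handle_message("Hi, can I track my order?"))
print(handle_message("My parcel arrived damaged."))
```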

So, AI can conceivably boost business productivity by up to 40%. Unsurprisingly, there has been an explosion of AI start-ups since 2000 to meet this need in the business world alone.

AI and machine learning for consumers

When we shop online, AI chatbots use natural language processing (NLP) to create a more personalized shopping experience for consumers. This is mostly a good thing, and it can remove some of the drearier customer service exchanges formerly handled by human staff. But there are also Alexa and Amazon’s Echo spying on us in the background (along with our Internet of Things appliances).
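As one simple illustration of the NLP behind such chatbots (not how any particular retailer actually does it), the sketch below uses TF-IDF text similarity from scikit-learn to match a shopper’s request to the closest product description; the catalogue and query are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented mini-catalogue and shopper query for illustration.
products = [
    "wireless noise-cancelling headphones",
    "stainless steel water bottle",
    "bluetooth fitness tracker with heart rate monitor",
    "ergonomic office chair",
]
query = "looking for headphones I can travel with"

# Turn the product descriptions and the query into TF-IDF vectors.
matrix = TfidfVectorizer().fit_transform(products + [query])

# Compare the query vector (last row) with every product vector.
scores = cosine_similarity(matrix[len(products)], matrix[:len(products)]).ravel()
best = scores.argmax()
print(f"Suggested product: {products[best]} (similarity {scores[best]:.2f})")
```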

Artificial intelligence and machine learning in Healthcare

In medicine and healthcare, AI analyses big data sets and compiles patient data to generate diagnoses or predictions about a person’s health. Proponents argue that AI helps with remote patient monitoring. They also note that healthcare workers can deliver diagnoses much faster. AI also permits patients to attend medical appointments remotely.
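As a hedged sketch of the “predictions from patient data” idea – using synthetic numbers and a basic scikit-learn classifier, not any real clinical model or dataset – the example below estimates a “higher risk” probability from a few invented vital signs.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic features: age (years), resting heart rate (bpm), systolic blood pressure (mmHg).
X = np.column_stack([
    rng.integers(30, 80, 200),
    rng.integers(55, 100, 200),
    rng.integers(100, 170, 200),
])
# Synthetic label: "higher risk" when age and blood pressure are both elevated.
y = ((X[:, 0] > 60) & (X[:, 2] > 140)).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
new_patient = [[67, 82, 150]]  # invented example reading
print("Predicted risk probability:", round(model.predict_proba(new_patient)[0, 1], 2))
```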

Dr Isaac Kohane, head of Harvard Medical School’s Department of Biomedical Informatics, says that AI will make it possible for all the medical knowledge in the world to be retrieved instantly to help solve any type of complex medical case. Some have argued that AI can help avoid human diagnostic errors, which are estimated to kill 200,000 people each year.

Wearable healthcare AIs in our Apple Watch or Fitbit measure our heart rate and blood pressure and count our steps. Yet health-monitoring apps and tech raise serious privacy and surveillance concerns about the collection of an individual’s personal data.
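As a tiny, hypothetical example of the kind of check such a device might run, the snippet below flags days where a resting heart rate sits well above the week’s average; the readings and the 15% threshold are invented for illustration.

```python
# Flag days where resting heart rate is well above the wearer's weekly average.
readings = {"Mon": 62, "Tue": 64, "Wed": 61, "Thu": 78, "Fri": 63, "Sat": 65, "Sun": 80}

average = sum(readings.values()) / len(readings)
for day, bpm in readings.items():
    if bpm > average * 1.15:
        print(f"{day}: resting heart rate {bpm} bpm is unusually high (weekly avg {average:.0f} bpm)")
```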

Artificial intelligence and machine learning attributes

The ethical dilemma

Alan Turing (the logical father of AI) summarized the ethical dilemma around AI as early as 1951 when he said:

“It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. They would be able to converse with each other to sharpen their wits. At some stage therefore, we should expect the machines to take control.”

Artificial intelligence and machine learning – Neuralink

Concerns about AI have segued into the dystopian, with entrepreneur billionaires like Elon Musk proposing to insert AI chips into brains (initially those of pigs) to create “super human cognition”. Musk’s Neuralink proposal sees artificial intelligence processes eventually incorporated into the human body.

Neuralink involves implanting a computer chip, with 96 small polymer threads each containing 32 electrodes, directly into a human brain. The Neuralink chip, which is about 8 millimeters in diameter (smaller than the tip of your finger), would be, in Musk’s words, like placing a “fitbit in your skull with little wires”.

The Neuralink can monitor the activity of around 1,000 brain neurons. Gertrude, the experimental pig who received the first Neuralink implant, is apparently doing well, and Elon Musk argues that the device can easily be removed. The permission of the pigs, Gertrude and Dorothy, was however never sought.

Artificial intelligence and machine learning – global perspectives

The World Economic Forum (WEF) has argued that AI is here to stay. The WEF believes that pandemic-driven remote work and online retail have demonstrated the benefits of AI in supporting all of these processes.

The World Economic Forum also cites the use of AI in education to support remote learning processes. Appen’s State of AI 2020 Report noted that 41% of businesses sped up their use of AI during the COVID-19 pandemic. 

Clearly there are benefits to using AI, even if just to make our digital lives easier and more convenient. That said, the increasing use of AI, if it continues unchecked, raises concerns about our privacy (Siri, Alexa, Echo and Google Assistant are always watching and listening in our smart homes).

AI and personal autonomy

There’s no doubt that this overly clever technology could threaten individual freedoms. Authoritarian governments or malevolent corporations could use AI-driven surveillance to target (and punish) individuals who question the status quo.

The presence of AI in so many aspects of our lives – from Google searches to shopping, entertainment, banking and climate control in our smart homes – must be balanced against an awareness of which aspects of our privacy and personal autonomy are being sacrificed for this perceived convenience.

Final thoughts

Artificial intelligence covers many helpful applications, including visual perception, speech recognition, reasoning, learning, planning, self-correction, and knowledge representation. AI apps make our work days easier and more efficient by removing tedious tasks which are easily automated.

Robotics and machine learning are subsets of AI, and the two work hand in hand in manufacturing. On the factory floor, AI combines machine learning with robotic arms to actively manipulate objects on a production line.

In 2023, artificial intelligence is developing rapidly without significant regulation or oversight, despite some concerned chatter from interested stakeholders including tech geeks, libertarians and creatives. If you follow the money, you’ll see that artificial intelligence is projected to add $15.7 trillion to the global economy before 2030. AI is not going away anytime soon.

Implantable medical technology like Musk’s Neuralink digital brain implants could diminish a person’s medical sovereignty. Our portable electronic devices (smartphones and watches), along with our smart homes, Android and Apple TVs (with Google Assistant), Alexa and Echo, are monitoring our every move and nudging our purchasing and viewing habits.

Developments such as these are important reminders that we must remain vigilant about any potentially dystopian consequences arising from this technology.