Artificial Intelligence (AI) today has become a familiar part of modern life, appearing in every field from business to healthcare. However, few realize that the history of AI development began in the mid-20th century and went through many ups and downs before achieving the explosive breakthroughs we see today.

This article by INVIAI offers a detailed look at the history of AI’s formation and development, from the earliest ideas, through the difficult “AI winters,” to the deep learning revolution and the generative AI wave that exploded in the 2020s.

1950s: The Beginning of Artificial Intelligence

The 1950s are considered the official starting point of the AI field. In 1950, mathematician Alan Turing published the paper “Computing Machinery and Intelligence,” in which he proposed a famous test to evaluate a machine’s ability to think – later known as the Turing Test. This milestone introduced the idea that computers could “think” like humans, laying the theoretical foundation for AI.

By 1956, the term “Artificial Intelligence” (AI) was officially coined. That summer, computer scientist John McCarthy (Dartmouth College), along with colleagues Marvin Minsky, Nathaniel Rochester (IBM), and Claude Shannon, organized a historic workshop at Dartmouth College.

McCarthy proposed the term “artificial intelligence” (AI) for this workshop, and the 1956 Dartmouth event is often regarded as the birth of the AI field. There, the assembled scientists boldly declared that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it”, setting ambitious goals for the new discipline.

The 1950s saw many early AI achievements. As early as 1951, AI programs were written to run on the Ferranti Mark I computer – notably Christopher Strachey’s checkers program and Dietrich Prinz’s chess program, marking the first time computers played intellectual games.

In 1955, Arthur Samuel at IBM developed a checkers program capable of learning from experience, becoming one of the first machine learning systems. Around the same time, Allen Newell, Herbert Simon, and colleagues wrote the Logic Theorist (1956) – a program that could automatically prove mathematical theorems, demonstrating that machines could perform logical reasoning.

Alongside algorithms, specialized AI programming tools and languages also emerged in the 1950s. In 1958, John McCarthy invented the Lisp programming language – designed specifically for AI, quickly becoming popular among AI developers. That same year, psychologist Frank Rosenblatt introduced the Perceptron – the first artificial neural network model capable of learning from data. The Perceptron laid the groundwork for modern neural networks.
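
As an aside for readers who like to see the mechanics, below is a minimal sketch of the perceptron learning rule in the spirit of Rosenblatt’s model. It assumes NumPy, and the tiny AND-gate dataset and learning rate are illustrative choices rather than details from the original 1958 work.

```python
# A toy perceptron: a weighted sum passed through a hard threshold, with weights
# nudged after every mistake. Dataset and hyperparameters are purely illustrative.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])   # inputs (AND gate)
y = np.array([0, 0, 0, 1])                       # target outputs
w, b, lr = np.zeros(2), 0.0, 0.1                 # weights, bias, learning rate

for _ in range(20):                              # a few passes over the data
    for xi, yi in zip(X, y):
        pred = int(w @ xi + b > 0)               # threshold activation
        w += lr * (yi - pred) * xi               # move weights toward the target
        b += lr * (yi - pred)

print([int(w @ xi + b > 0) for xi in X])         # [0, 0, 0, 1]
```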

In 1959, Arthur Samuel first used the term “machine learning” in a landmark paper describing how computers could be programmed to learn and improve their game-playing ability beyond their original programming. These developments reflected strong optimism: pioneers believed that within a few decades, machines could achieve human-like intelligence.


1960s: Early Progress

Entering the 1960s, AI continued to develop with many notable projects and inventions. AI laboratories were established at prestigious universities (MIT, Stanford, Carnegie Mellon, and others), attracting research interest and funding. Computers became more powerful, allowing experimentation with more complex AI ideas than in the previous decade.

A major milestone was the creation of the first chatbot program. In 1966, Joseph Weizenbaum at MIT created ELIZA, a program simulating conversation with users in the style of a psychotherapist. ELIZA was programmed simply (based on keyword recognition and scripted responses), but surprisingly many people believed ELIZA truly “understood” and had emotions. ELIZA’s success paved the way for modern chatbots and raised questions about humans’ tendency to attribute emotions to machines.
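
To illustrate just how simple this keyword-and-script approach can be, here is a minimal ELIZA-style sketch in Python. The rule table is invented for illustration; Weizenbaum’s original program used much richer decomposition and reassembly rules.

```python
# A tiny ELIZA-style responder: scan the input for known keywords and return a
# canned reply. The rules below are made up; the 1966 original was more elaborate.
RULES = {
    "mother": "Tell me more about your family.",
    "sad": "Why do you feel sad?",
    "always": "Can you think of a specific example?",
}
DEFAULT = "Please go on."

def eliza_reply(utterance: str) -> str:
    text = utterance.lower()
    for keyword, response in RULES.items():      # first matching keyword wins
        if keyword in text:
            return response
    return DEFAULT

print(eliza_reply("I am always sad about my mother"))   # "Tell me more about your family."
```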

At the same time, the first intelligent robots appeared. From 1966 to 1972, the Stanford Research Institute (SRI) developed Shakey – the first mobile robot able to reason about its own actions and plan them, rather than just following individual commands. Shakey was equipped with sensors and cameras to navigate its environment and could break tasks down into basic steps such as finding a route, pushing obstacles aside, and climbing ramps. It was the first system to fully integrate computer vision, natural language processing, and planning in a robot, laying the foundation for later AI robotics.

The American Association for Artificial Intelligence (AAAI) also traces its roots to this period (growing out of the community around the first IJCAI conference in 1969, and formally established in 1980), bringing AI researchers together and reflecting the field’s rapid growth.

Additionally, the 1960s saw the development of expert systems and foundational algorithms. In 1965, Edward Feigenbaum and colleagues developed DENDRAL – considered the world’s first expert system. DENDRAL was designed to assist chemists in analyzing molecular structures from experimental data by simulating expert knowledge and reasoning. Its success demonstrated that computers could help solve complex specialized problems, laying the groundwork for the expert system boom in the 1980s.

Slightly later, in 1972, the Prolog programming language (designed for logic-based AI) was developed at the University of Marseille, opening the path for rule- and logic-based AI approaches. Another key milestone came in 1969, when Marvin Minsky and Seymour Papert published the book “Perceptrons”. It highlighted the mathematical limitations of single-layer perceptron models (for example, their inability to solve the simple XOR problem), casting serious doubt on the neural network field.

Many funders lost faith in neural network learning, and neural network research gradually declined by the late 1960s. This marked the first sign of an “AI winter” after more than a decade of optimism.


1970s: Challenges and the First “AI Winter”

In the 1970s, AI faced real-world challenges: many high expectations from the previous decade were unmet due to limitations in computing power, data, and scientific understanding. As a result, confidence and funding for AI sharply declined by the mid-1970s – a period later called the first “AI winter”.

In 1973, Sir James Lighthill added fuel to the fire by publishing a report titled “Artificial Intelligence: A General Survey” that critically assessed AI research progress. The Lighthill Report concluded that AI researchers had “promised too much but delivered too little”, especially criticizing computers’ inability to understand language or vision as expected.

This report led the UK government to cut most AI funding. In the US, agencies like DARPA shifted investments to more practical projects. Consequently, from the mid-1970s to early 1980s, AI research nearly froze, with few breakthroughs and severe funding shortages. This was the AI winter – a term coined in 1984 to describe this prolonged “freeze” in AI research.

Despite difficulties, the 1970s had some bright spots in AI research. Expert systems continued to develop academically, notably MYCIN (1974) – a medical expert system built by Ted Shortliffe at Stanford to diagnose blood infections. MYCIN used inference rules to provide treatment recommendations with high accuracy, demonstrating the practical value of expert systems in narrow domains.
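
To give a flavor of how such systems reasoned, here is a minimal sketch of if-then rule inference in the general style MYCIN popularized. The facts, rules, and certainty values are invented for illustration and are not taken from MYCIN’s actual medical knowledge base.

```python
# Toy rule-based inference: fire every rule whose conditions are all present among
# the observed findings. Rules and certainty factors are invented examples.
RULES = [
    # (required findings, conclusion, certainty factor)
    ({"gram_negative", "rod_shaped", "anaerobic"}, "suspect_organism_A", 0.7),
    ({"gram_positive", "coccus", "grows_in_clusters"}, "suspect_organism_B", 0.6),
]

def infer(findings):
    """Return (conclusion, certainty) for every rule whose conditions are satisfied."""
    return [(conclusion, cf) for conditions, conclusion, cf in RULES
            if conditions <= findings]           # subset test: all conditions observed

observed = {"gram_negative", "rod_shaped", "anaerobic", "fever"}
print(infer(observed))                           # [('suspect_organism_A', 0.7)]
```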

Meanwhile, the Prolog language (introduced in 1972) began to be applied in language processing and logic problem-solving, becoming an important tool for logic-based AI. In robotics, in 1979 a Stanford research team successfully developed the Stanford Cart – the first robot vehicle to autonomously navigate a room full of obstacles without remote control. Though modest, this achievement laid the foundation for later self-driving car research.

Overall, by the late 1970s, AI research entered a lull. Many AI scientists had to shift focus to related fields such as machine learning, statistics, robotics, and computer vision to continue their work.

AI was no longer the “shining star” of the previous decade but became a narrow field with few notable advances. This period reminded researchers that artificial intelligence is far more complex than initially thought, requiring fundamentally new approaches beyond simple reasoning models.


1980s: Expert Systems – Rise and Decline

By the early 1980s, AI entered a renaissance, driven by the commercial success of expert systems and renewed investment from governments and businesses. Computers became more powerful, and the community believed that AI ideas could gradually be realized in narrow domains.

A major driver was commercial expert systems. In 1981, Digital Equipment Corporation deployed XCON (Expert Configuration) – an expert system that helped configure computer systems, saving the company tens of millions of dollars. XCON’s success sparked a wave of expert system development in enterprises to support decision-making. Many tech companies invested in creating expert system shells so businesses could customize their own systems.

The Lisp language also moved beyond the lab with the emergence of Lisp machines – specialized hardware optimized for running AI programs. In the early 1980s, a series of Lisp machine startups (Symbolics, Lisp Machines Inc.) emerged, creating an investment boom and ushering in the “Lisp machine era” for AI.

Major governments also heavily funded AI during this period. In 1982, Japan launched the Fifth Generation Computer Project with a budget of $850 million to develop intelligent computers using logic and Prolog. Similarly, the US (DARPA) increased AI research funding amid technological competition with Japan. These projects focused on expert systems, natural language processing, and knowledge bases, aiming to create advanced intelligent computers.

Amid this new optimism, the field of artificial neural networks quietly revived. In 1986, David Rumelhart, Geoffrey Hinton, and Ronald Williams published a landmark paper on the backpropagation algorithm – an effective method for training multi-layer neural networks that overcame the limitations highlighted in the 1969 book Perceptrons.

Although the backpropagation principle was outlined in 1970, it was only in the mid-1980s that it was fully exploited thanks to increased computing power. The backpropagation algorithm quickly sparked a second wave of neural network research. At this time, belief grew that deep neural networks could learn complex models, laying the groundwork for later deep learning.
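
For readers who want to see the idea in miniature, the sketch below trains a tiny two-layer network with backpropagation on the XOR problem – exactly the kind of task a single-layer perceptron cannot solve. It assumes NumPy; the network size, learning rate, and iteration count are illustrative choices.

```python
# Minimal backpropagation: push the output error backwards through the layers
# (chain rule) and adjust each weight matrix by gradient descent.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)          # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)            # hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)            # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)                             # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = out - y                                      # output-layer error signal
    d_h = (d_out @ W2.T) * h * (1 - h)                   # error propagated backwards
    W2 -= lr * h.T @ d_out                               # gradient-descent updates
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.ravel().round(2))                              # typically close to [0, 1, 1, 0]
```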

Young researchers like Yann LeCun (France) and Yoshua Bengio (Canada) joined the neural network movement, successfully developing handwriting recognition models by the late 1980s.

However, the second AI boom was short-lived. By the late 1980s, AI again fell into crisis due to disappointing results. Expert systems, while useful in narrow applications, revealed limitations: they were rigid, hard to scale, and required continuous manual knowledge updates.

Many large expert system projects failed, and the Lisp machine market collapsed under competition from cheaper general-purpose personal computers; by 1987 the specialized Lisp hardware industry was all but gone. AI investment was sharply cut again in the late 1980s, leading to a second “AI winter”: the term, coined in 1984, proved apt as many AI companies closed in 1987–1988. Once again, AI entered a downturn, forcing researchers to adjust their expectations and strategies.

In summary, the 1980s marked a cycle of boom and bust for AI. Expert systems helped AI enter industrial applications for the first time but also exposed the limitations of rule-based approaches. Nevertheless, this period produced valuable ideas and tools: from neural network algorithms to early knowledge bases. Important lessons about avoiding overhype were learned, setting the stage for a more cautious approach in the following decade.


1990s: AI Returns to Practicality

After the late-1980s AI winter, confidence in AI gradually recovered in the 1990s thanks to a series of practical advances. Instead of focusing on ambitious “strong AI” (artificial general intelligence), researchers concentrated on “weak AI” – applying AI techniques to specific problems, where they began to show impressive results. Many AI subfields spun off from earlier research (such as speech recognition, computer vision, search algorithms, and knowledge bases) and developed independently with wide applications.

A key milestone marking practical success was in May 1997, when IBM’s Deep Blue defeated world chess champion Garry Kasparov in an official match. This was the first time an AI system beat a world champion in a complex intellectual game, causing a media sensation.

Deep Blue’s victory – based on brute-force search algorithms combined with an extensive opening database – demonstrated the enormous computing power and specialized techniques that could help machines surpass humans in well-defined tasks. This event marked AI’s spectacular return to the spotlight, reigniting research enthusiasm after years of dormancy.
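
To make the “brute-force search” idea concrete, here is a minimal minimax sketch applied to a toy take-away game (remove 1 or 2 stones; whoever takes the last stone wins). The toy game is purely illustrative – Deep Blue’s real search added alpha-beta pruning, handcrafted evaluation, special-purpose hardware, and opening and endgame databases.

```python
# Minimax over a full game tree: the side to move picks the move that maximizes
# its guaranteed outcome, assuming the opponent replies optimally.
from functools import lru_cache

@lru_cache(maxsize=None)
def best_score(stones: int, maximizing: bool) -> int:
    """+1 if the maximizing player can force a win from this position, else -1."""
    if stones == 0:
        # The previous player just took the last stone and won.
        return -1 if maximizing else 1
    scores = [best_score(stones - take, not maximizing)
              for take in (1, 2) if take <= stones]
    return max(scores) if maximizing else min(scores)

print(best_score(7, True))   # 1: with 7 stones the side to move can force a win
```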

Beyond chess, 1990s AI made progress on many fronts. In games, the checkers program Chinook reached world-champion level in 1994, claiming the world title when champion Marion Tinsley withdrew for health reasons.

In speech recognition, commercial systems like Dragon Dictate (1990) emerged, and by the late decade, voice recognition software was widely used on personal computers. Handwriting recognition was also integrated into PDAs (personal digital assistants) with increasing accuracy.

Machine vision applications began to be deployed in industry, from component inspection to security systems. Even machine translation – a field that had frustrated AI researchers since the 1960s – made notable progress with systems like SYSTRAN supporting multilingual automatic translation for the European Union.

Another important direction was statistical machine learning and neural networks applied to large-scale data mining. The late 1990s saw the Internet boom, generating massive digital data. Techniques like data mining and machine learning algorithms such as decision trees, neural networks, and hidden Markov models were used to analyze web data, optimize search engines, and personalize content.

The term “data science” was not yet popular, but in practice AI had already penetrated software systems to improve performance by learning from user data (e.g., email spam filters, product recommendations in e-commerce). These small but practical successes helped restore AI’s credibility with businesses and society.

It can be said that the 1990s was a period when AI quietly but steadily entered everyday life. Instead of grand claims of human-like intelligence, developers focused on solving specialized problems. As a result, AI was present in many late-20th-century technology products – from games and software to electronic devices – often without users noticing. This period also laid important foundations in data and algorithms, preparing AI for the explosive growth of the next decade.


2000s: Machine Learning and the Big Data Era

Entering the 21st century, AI transformed dramatically thanks to the Internet and the big data era. The 2000s witnessed the explosion of personal computers, the Internet, and sensor devices, generating enormous amounts of data. Machine learning – especially supervised learning methods – became the main tool to exploit this “data goldmine.”

The slogan “data is the new oil” became popular because the more data available, the more accurate AI algorithms could learn. Major tech companies began building systems to collect and learn from user data to improve products: Google with smarter search engines, Amazon with behavior-based shopping recommendations, Netflix with movie suggestion algorithms. AI gradually became the silent “brain” behind digital platforms.

In 2006, a landmark event occurred: Fei-Fei Li, a professor at Stanford University, initiated the ImageNet project – a massive database of over 14 million images with detailed labels. Introduced in 2009, ImageNet quickly became the standard dataset for training and evaluating computer vision algorithms, especially for object recognition in images.

ImageNet became the fuel that later propelled deep learning research, by providing enough data to train complex deep models. The annual ImageNet Challenge, held from 2010 onwards, became a key battleground where research teams competed to develop the best image recognition algorithms. From this platform, a historic AI breakthrough would occur in 2012 (see the 2010s section).

Also in the 2000s, AI achieved many notable application milestones:

  • In 2005, the Stanford self-driving car (nicknamed “Stanley”) won the DARPA Grand Challenge – a 212 km desert autonomous vehicle race. Stanley completed the course in 6 hours 53 minutes, ushering in a new era for self-driving cars and attracting major investments from Google and Uber in subsequent years.
  • Virtual assistants on phones emerged: in 2008, the Google Voice Search app enabled voice search on the iPhone; and the pinnacle was Apple Siri (launched 2011) – a voice-controlled virtual assistant integrated into the iPhone. Siri used speech recognition, natural language understanding, and web services to respond to users, marking AI’s first large-scale public adoption.
  • In 2011, supercomputer IBM Watson defeated two champions on the American TV quiz show Jeopardy!. Watson could understand complex English questions and retrieve vast data to find answers, demonstrating AI’s strength in natural language processing and information retrieval. This victory proved that computers could “understand” and respond intelligently in a broad knowledge domain.
  • Social networks and the web: Facebook introduced automatic face recognition tagging (around 2010), using machine learning on user photo data. YouTube and Google used AI to filter content and recommend videos. Machine learning techniques quietly operated behind the scenes, helping optimize user experience often without users’ awareness.

It can be said that the main driver of AI in the 2000s was data and applications. Traditional machine learning algorithms like regression, SVM, decision trees, etc., were deployed at scale, delivering practical effectiveness.

AI shifted from a research topic to industrial adoption: “AI for business” became a hot topic, with many companies offering AI solutions in management, finance, marketing, and more. In 2006, the term “enterprise AI” emerged, emphasizing AI’s application to improve business efficiency and decision-making.

The late 2000s also saw the early signs of the deep learning revolution. Research on multi-layer neural networks continued to bear fruit. In 2009, Andrew Ng’s team at Stanford announced using GPUs (graphics processing units) to train neural networks 70 times faster than conventional CPUs.

The parallel computing power of GPUs was naturally suited for matrix calculations in neural networks, paving the way for training large deep learning models in the 2010s. The final pieces – big data, powerful hardware, improved algorithms – were ready, just waiting for the right moment to ignite a new AI revolution.
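
A minimal sketch (NumPy only, with illustrative sizes) of why GPUs fit neural networks so well: a layer’s forward pass over a whole batch of inputs is one large, dense matrix multiplication – exactly the kind of massively parallel arithmetic that graphics hardware is built to accelerate.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 1024))       # a batch of 256 input vectors
W = rng.normal(size=(1024, 4096))      # one layer's weight matrix
b = np.zeros(4096)

H = np.maximum(X @ W + b, 0.0)         # matrix multiply + bias + ReLU = one layer
print(H.shape)                         # (256, 4096): all 256 examples at once
```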


2010s: The Deep Learning Revolution

If there is one period when AI truly “took off”, it was the 2010s. Building on the data and hardware foundations of the previous decade, artificial intelligence entered the deep learning era – multi-layer neural network models achieved breakthrough results, breaking all records across a wide range of AI tasks. The dream of machines “learning like the human brain” partially became reality through deep learning algorithms.

The historic turning point came in 2012, when Geoffrey Hinton and his students Alex Krizhevsky and Ilya Sutskever entered the ImageNet Challenge. Their model – commonly called AlexNet – was an 8-layer convolutional neural network trained on GPUs. AlexNet achieved outstanding accuracy, with an error rate of roughly 15% versus about 26% for the second-place team.

This overwhelming victory stunned the computer vision community and marked the start of the “deep learning craze” in AI. In the following years, most traditional image recognition methods were replaced by deep learning models.

AlexNet’s success confirmed that with enough data (ImageNet) and computing power (GPUs), deep neural networks could outperform other AI techniques. Hinton and colleagues were quickly recruited by Google, and deep learning became the hottest keyword in AI research from then on.

Deep learning revolutionized not only computer vision but also spread to speech processing, language, and many other fields. In 2012, Google Brain (a project by Andrew Ng and Jeff Dean) made waves by publishing a deep neural network that learned to recognize the concept of “cat” from unlabeled YouTube videos.

Between 2011 and 2014, virtual assistants like Siri, Google Now (2012), and Microsoft Cortana (2014) were launched, leveraging advances in speech recognition and natural language understanding. For example, Microsoft’s speech recognition system reached human-level accuracy by 2017, largely thanks to deep neural networks modeling audio. In translation, Google Translate switched to neural machine translation (NMT) architecture in 2016, significantly improving quality over statistical models.

Another major milestone was AI’s victory in the game of Go – once considered far beyond reach. In March 2016, DeepMind’s AlphaGo defeated world Go champion Lee Sedol 4-1. Go is far more complex than chess, with too many possible moves for brute-force search. AlphaGo combined deep learning and Monte Carlo Tree Search algorithms, learning from millions of human games and self-play.

This victory was compared to Deep Blue’s 1997 match, confirming that AI could surpass humans in domains requiring intuition and experience. After AlphaGo, DeepMind developed AlphaGo Zero (2017), which learned Go entirely from scratch without human data and still defeated the previous version 100-0. This demonstrated the potential of reinforcement learning combined with deep learning for superhuman performance.
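
For a feel of the tree-search half of this combination, here is a minimal sketch of the UCB-style selection rule used inside Monte Carlo Tree Search; the per-move statistics are made up, and AlphaGo’s actual selection rule additionally mixed in policy-network priors and value-network evaluations.

```python
# UCT selection: balance exploitation (average win rate so far) against
# exploration (moves that have rarely been tried). Statistics are invented.
import math

def uct_score(wins: float, visits: int, parent_visits: int, c: float = 1.4) -> float:
    if visits == 0:
        return float("inf")                 # always try unvisited moves first
    exploit = wins / visits                 # empirical win rate of this move
    explore = c * math.sqrt(math.log(parent_visits) / visits)
    return exploit + explore

stats = {"move_a": (6, 10), "move_b": (3, 4), "move_c": (0, 0)}   # (wins, visits)
parent_visits = sum(v for _, v in stats.values())
best = max(stats, key=lambda m: uct_score(*stats[m], parent_visits))
print(best)                                 # "move_c": unvisited, so explored first
```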

Also in 2017, a breakthrough in natural language processing emerged: the Transformer architecture. Google researchers published the Transformer model in the paper “Attention Is All You Need”, proposing a self-attention mechanism that allowed models to learn relationships between words in a sentence without sequential processing.

Transformer enabled training of large language models (LLMs) much more efficiently than previous sequential architectures (RNN/LSTM). Since then, a series of improved language models based on Transformer appeared: BERT (Google, 2018) for contextual understanding, and notably GPT (Generative Pre-trained Transformer) by OpenAI, first introduced in 2018.
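
To make the self-attention idea tangible, below is a minimal NumPy sketch of scaled dot-product self-attention over a handful of token embeddings; the tiny dimensions and random matrices are illustrative, and real Transformers add multiple heads, positional information, and feed-forward layers.

```python
# Scaled dot-product self-attention: every position attends to every other
# position, weighted by how well their query and key vectors match.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model) embeddings; Wq/Wk/Wv: learned projection matrices."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv                    # queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])             # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # row-wise softmax
    return weights @ V                                  # mix values across positions

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                             # 4 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)              # (4, 8)
```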

These models achieved outstanding results in language tasks from classification and question answering to text generation. Transformer laid the foundation for the race to build massive language models in the 2020s.

The late 2010s also saw the emergence of generative AI – AI models capable of creating new content. In 2014, Ian Goodfellow and colleagues invented the GAN (Generative Adversarial Network), in which two neural networks – a generator and a discriminator – are trained against each other to produce realistic synthetic data.

GANs quickly became famous for their ability to generate highly realistic fake human portraits (deepfakes). Alongside them, variational autoencoders (VAEs) and style-transfer networks were developed, enabling images and videos to be transformed into new artistic styles.
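
The adversarial training idea can be sketched in a few dozen lines. The example below assumes PyTorch is available; the one-dimensional “real” data, network sizes, and training length are toy choices for illustration, not details from the original 2014 paper.

```python
# A toy GAN: the generator learns to mimic a simple 1-D distribution while the
# discriminator learns to tell real samples from generated ones.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(32, 1) * 0.5 + 3.0        # toy "real" data: N(3, 0.5)
    fake = G(torch.randn(32, 8))                 # generated samples from noise

    # 1) Train the discriminator to separate real from fake.
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(32, 1))
    g_loss.backward()
    opt_g.step()

print(fake.mean().item())                        # should drift toward 3.0 as training proceeds
```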

By 2019, OpenAI introduced GPT-2 – a 1.5 billion parameter text generation model notable for producing fluent, human-like paragraphs. Clearly, AI was no longer just for classification or prediction, but could creatively generate content convincingly.

AI in the 2010s made leaps beyond expectations. Many tasks once considered “impossible” for computers – image recognition, speech recognition, translation, complex games – were now performed at or beyond human level.

More importantly, AI began permeating everyday life: from smartphone cameras that automatically recognize faces and virtual assistants in smart speakers (Alexa, Google Home) to content recommendations on social media – all powered by AI. This was truly the era of the AI explosion, leading many to call AI “the new electricity” – a foundational technology transforming every industry.


2020s: The Generative AI Boom and New Trends

In just the first few years of the 2020s, AI has exploded at an unprecedented pace, mainly driven by the rise of generative AI and large language models (LLMs). These systems have enabled AI to reach hundreds of millions of users directly, sparking a wave of creative applications and widespread social discussions about AI’s impact.

In June 2020, OpenAI introduced GPT-3 – a massive language model with 175 billion parameters, ten times larger than the previous largest model. GPT-3 amazed users with its ability to write paragraphs, answer questions, compose poetry, and write code almost like a human, though it still made factual errors. GPT-3’s power showed that model scale combined with massive training data could produce unprecedented language fluency. Applications based on GPT-3 quickly emerged, from marketing content generation and email assistants to programming support.

By November 2022, AI truly entered the public spotlight with the launch of ChatGPT – an interactive chatbot developed by OpenAI based on the GPT-3.5 model. In just 5 days, ChatGPT reached 1 million users, and within 2 months surpassed 100 million users, becoming the fastest-growing consumer app in history.

ChatGPT can fluently answer a wide range of questions, from drafting texts and solving math problems to providing advice, astonishing users with its “intelligence” and flexibility. Its popularity marked AI’s first widespread use as a creative content tool and kicked off the AI race among major tech giants.

Early in 2023, Microsoft integrated GPT-4 (OpenAI’s next model) into its Bing search engine, while Google launched the Bard chatbot using its own LaMDA model. This competition has helped generative AI technology reach wider audiences and improve rapidly.

Beyond text, generative AI for images and audio has also advanced dramatically. In 2022, text-to-image models like DALL-E 2 (OpenAI), Midjourney, and Stable Diffusion allowed users to type a text prompt and receive an AI-generated image. The results were so vivid and creative that they were often hard to distinguish from human-made work, ushering in a new era of digital content creation.

However, this also raises challenges around copyright and ethics, as AI learns from artists’ works and produces similar outputs. In audio, new-generation text-to-speech models can convert text into voices indistinguishable from real humans, even imitating famous voices, raising concerns about voice deepfakes.

In 2023, for the first time, lawsuits over AI training data copyrights emerged – for example, Getty Images sued Stability AI (developer of Stable Diffusion) for using millions of copyrighted images without permission. This highlights the dark side of the AI boom: legal, ethical, and social issues are surfacing, demanding serious attention.

Amid the AI frenzy, 2023 also saw experts express concerns about the risks of powerful AI. Over 1,000 technology figures (including Elon Musk, Steve Wozniak, and many AI researchers) signed an open letter calling for a 6-month pause on training AI systems more powerful than GPT-4, citing fears of uncontrolled rapid development.

That year, pioneers such as Geoffrey Hinton (one of the “fathers” of deep learning) also warned about the dangers of AI escaping human control. The European Union moved quickly to finalize the AI Act (EU AI Act) – the world’s first comprehensive AI regulation, expected to take effect in 2024. The law bans AI systems deemed to pose “unacceptable risk” (such as social scoring and certain forms of mass surveillance) and requires transparency for general-purpose AI models.

In the US, several states enacted laws limiting AI use in sensitive areas (recruitment, finance, election campaigning, etc.). Clearly, the world is rushing to establish legal and ethical frameworks for AI, an inevitable step as the technology’s impact deepens.

Overall, the 2020s are witnessing an AI explosion both technically and socially. New-generation AI tools like ChatGPT, DALL-E, Midjourney, etc., have become familiar, helping millions create and work more efficiently in unprecedented ways.

At the same time, the investment race in AI is heating up: forecasts predict enterprise spending on generative AI will exceed $1 billion in coming years. AI is also penetrating deeper into sectors like healthcare (supporting medical imaging diagnosis, drug discovery), finance (risk analysis, fraud detection), education (virtual tutors, personalized learning content), transportation (advanced self-driving cars), defense (tactical decision-making), and more.

It can be said that AI today is like electricity or the Internet – a technological infrastructure every business and government wants to harness. Many experts are optimistic that AI will continue delivering leaps in productivity and quality of life if developed and managed responsibly.



From the 1950s to today, the history of AI development has been an astonishing journey – full of ambition, disappointment, and resurgence. From the small 1956 Dartmouth workshop that laid the foundation, AI has twice fallen into “AI winters” due to overhyped expectations, but each time rebounded stronger thanks to scientific and technological breakthroughs. Especially in the past 15 years, AI has advanced dramatically, truly stepping out of the lab into the real world and making a profound impact.

Currently, AI is present in almost every field and is becoming smarter and more versatile. However, the goal of strong AI (artificial general intelligence) – a machine with flexible, human-like intelligence – still lies ahead.

Today’s AI models are impressive but still excel only within the tasks they were trained for, and they sometimes make silly mistakes (such as ChatGPT “hallucinating” false information with high confidence). The challenges of safety and ethics also demand urgent attention: how to develop AI that remains controllable and transparent, and that serves the common good of humanity.

The next chapter of AI promises to be extremely exciting. With current momentum, we can expect AI to penetrate even deeper into life: from AI doctors assisting healthcare, AI lawyers researching legal texts, to AI companions supporting learning and emotional well-being.

Technologies like neuromorphic computing are being researched to mimic brain architecture, potentially creating a new generation of AI that is more efficient and closer to natural intelligence. Although the prospect of AI surpassing human intelligence remains controversial, it is clear that AI will continue evolving and profoundly shape humanity’s future.

Looking back at the history of AI’s formation and development, we see a story of human perseverance and relentless creativity. From machines that could only calculate, humans have taught computers to play games, drive cars, recognize the world, and even create art. Artificial intelligence has been, is, and will continue to be a testament to our ability to transcend our own limits.

The important lesson from history is to set realistic expectations and develop AI responsibly – ensuring AI brings maximum benefit to humanity in the journeys ahead.