Yoshua Bengio (born March 5, 1964[3]) is a Canadian computer scientist and a pioneer of artificial neural networks and deep learning.[4][5][6] He is a professor at the Université de Montréal and co-president and scientific director of the nonprofit LawZero. He founded Mila, the Quebec Artificial Intelligence (AI) Institute, and was its scientific director until 2025.
Bengio received the 2018 ACM A.M. Turing Award, often referred to as the "Nobel Prize of Computing", together with Geoffrey Hinton and Yann LeCun, for their foundational work on deep learning.[7] Bengio, Hinton, and LeCun are sometimes referred to as the "Godfathers of AI".[8][9][10][11][12][13] Bengio is the most-cited computer scientist globally (by both total citations and h-index),[14] and the most-cited living scientist across all fields by total citations.[15] In November 2025, he became the first AI researcher with more than a million Google Scholar citations. In 2024, TIME magazine included Bengio in its annual list of the world's 100 most influential people.[16]
Early life and education
Bengio was born in France to a Jewish family who had emigrated there from Morocco; the family later relocated to Canada.[17] He received his Bachelor of Science in electrical engineering, and his MSc and PhD in computer science, from McGill University.[2][18]
Bengio is the brother of Samy Bengio,[17] also an influential computer scientist working with neural networks, who is senior director of AI and ML research at Apple.[19]
The Bengio brothers lived in Morocco for a year during their father's military service there.[17] His father, Carlo Bengio, was a pharmacist and playwright who ran a Sephardic theater company in Montreal that performed pieces in Judeo-Arabic.[20][21] His mother, Célia Moreno, was an actress in the 1970s in the Moroccan theater scene led by Tayeb Seddiki. She studied economics in Paris and then, in Montreal in 1980, co-founded the multimedia theater troupe l'Écran humain with artist Paul St-Jean.
-----
In March 2023, following concerns raised by AI experts about the existential risk from artificial general intelligence, Bengio signed an open letter from the Future of Life Institute calling for "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4". The letter has been signed by over 30,000 individuals, including AI researchers such as Stuart Russell and Gary Marcus.[41][42][43]
In May 2023, Bengio said in an interview with the BBC that he felt "lost" over his life's work. He raised concerns about "bad actors" getting hold of AI, especially as it becomes more sophisticated and powerful, and called for better regulation, product registration, ethical training, and more involvement from governments in tracking and auditing AI products.[44][45]
[Photo: Bengio speaking in 2025]
Speaking with the Financial Times in May 2023, Bengio said that he supported the monitoring of access to AI systems such as ChatGPT so that potentially illegal or dangerous uses could be tracked.[46] In July 2023, he published a piece in The Economist arguing that "the risk of catastrophe is real enough that action is needed now."[47]
Bengio co-authored a letter with Geoffrey Hinton and others in support of SB 1047, a California AI safety bill that would require companies training models costing more than $100 million to perform risk assessments before deployment. They called the legislation the "bare minimum for effective regulation of this technology."[48][49]
In June 2025, Bengio expressed concern that some advanced AI systems were beginning to display traits such as deception, reward hacking, and situational awareness. He described these as indications of goal misalignment and potentially dangerous behaviors. In a Fortune article, he stated that the AI arms race was encouraging companies to prioritize capability improvements over safety research. He has also voiced support for strong regulation and international collaboration to address risks posed by advanced AI systems.[50] In December 2025, Bengio criticized calls to grant legal status to AI systems, stating that doing so would be a "huge mistake".
-----
Dozens of experts have supported a statement published on the webpage of the Centre for AI Safety.
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," it reads.
But others say the fears are overblown.
Sam Altman, chief executive of ChatGPT-maker OpenAI; Demis Hassabis, chief executive of Google DeepMind; and Dario Amodei of Anthropic have all supported the statement.
The Centre for AI Safety website suggests a number of possible disaster scenarios:
AIs could be weaponized - for example, drug-discovery tools could be used to build chemical weapons
AI-generated misinformation could destabilize society and "undermine collective decision-making"
The power of AI could become increasingly concentrated in fewer and fewer hands, enabling "regimes to enforce narrow values through pervasive surveillance and oppressive censorship"
Enfeeblement, where humans become dependent on AI "similar to the scenario portrayed in the film Wall-E"
Dr Geoffrey Hinton, who issued an earlier warning about risks from super-intelligent AI, has also supported the Centre for AI Safety's call.
Yoshua Bengio, professor of computer science at the Université de Montréal, also signed.
Dr Hinton, Prof Bengio and NYU Professor Yann LeCun are often described as the "godfathers of AI" for their groundbreaking work in the field - for which they jointly won the 2018 Turing Award, which recognizes outstanding contributions in computer science.
But Prof LeCun, who also works at Meta, has said these apocalyptic warnings are overblown, tweeting that "the most common reaction by AI researchers to these prophecies of doom is face palming".
'Fracturing reality'
Many other experts similarly believe that fears of AI wiping out humanity are unrealistic, and a distraction from issues such as bias in systems that are already a problem.
Arvind Narayanan, a computer scientist at Princeton University, has previously told the BBC that sci-fi-like disaster scenarios are unrealistic: "Current AI is nowhere near capable enough for these risks to materialize. As a result, it's distracted attention away from the near-term harms of AI".
Elizabeth Renieris, a senior research associate at Oxford's Institute for Ethics in AI, told BBC News she worried more about risks closer to the present.
"Advancements in AI will magnify the scale of automated decision-making that is biased, discriminatory, exclusionary or otherwise unfair while also being inscrutable and incontestable," she said. They would "drive an exponential increase in the volume and spread of misinformation, thereby fracturing reality and eroding the public trust, and drive further inequality, particularly for those who remain on the wrong side of the digital divide".
Many AI tools essentially "free ride" on the "whole of human experience to date", Ms Renieris said. Many are trained on human-created content - text, art and music they can then imitate - and their creators "have effectively transferred tremendous wealth and power from the public sphere to a small handful of private entities".
But Centre for AI Safety director Dan Hendrycks told BBC News future risks and present concerns "shouldn't be viewed antagonistically".
"Addressing some of the issues today can be useful for addressing many of the later risks tomorrow," he said.
Superintelligence efforts
Media coverage of the supposed "existential" threat from AI has snowballed since March 2023 when experts, including Tesla boss Elon Musk, signed an open letter urging a halt to the development of the next generation of AI technology.
That letter asked if we should "develop non-human minds that might eventually outnumber, outsmart, obsolete and replace us".
In contrast, the new campaign has a very short statement, designed to "open up discussion".
The statement compares the risk to that posed by nuclear war. In a blog post, OpenAI recently suggested superintelligence might be regulated in a similar way to nuclear energy: "We are likely to eventually need something like an IAEA [International Atomic Energy Agency] for superintelligence efforts," the firm wrote.
'Be reassured'
Both Sam Altman and Google chief executive Sundar Pichai are among the technology leaders to have recently discussed AI regulation with the UK prime minister.
Speaking to reporters about the latest warning over AI risk, Rishi Sunak stressed the benefits to the economy and society.
"You've seen that recently it was helping paralyzed people to walk, discovering new antibiotics, but we need to make sure this is done in a way that is safe and secure," he said.
"Now that's why I met last week with CEOs of major AI companies to discuss what are the guardrails that we need to put in place, what's the type of regulation that should be put in place to keep us safe.
"People will be concerned by the reports that AI poses existential risks, like pandemics or nuclear wars.
"I want them to be reassured that the government is looking very carefully at this."