Tickets: Student tickets are £15, standard tickets are £20 (including Eventbrite fees), available HERE

 

Directions and parking

Information about the venue location, parking, and layout may be found HERE.

Agenda

18:30 Pull up a groove and get fabulous! (a.k.a. registration – please arrive early)

19:00 Scheduled talk: "A Model of General Intelligence"

20:00 Pizza pizza pizza pizza pizza! (Stimulating conversation encouraged…)

21:00 Event close – you don’t have to go home, but you can’t stay here!

Motivation

On Thursday 25 May, Professor Geoffrey Hinton gave his first public lecture since leaving Google due to his concerns about AI safety. The sold-out lecture, entitled “Two Paths to Intelligence”, was held at the Cambridge University Engineering Department, with remote attendance also available via Zoom.
 
During the talk, Professor Hinton explained that, for most of his professional life, he had believed that highly energy-efficient analog learners (such as the human brain) were better learners than their much less energy-efficient digital counterparts (such as artificial neural networks). Consequently, he had long held that super-intelligent AI, although a longer-term possibility, was still many decades away, and thus nothing to worry about for now.
 
Hinton's recent work (in collaboration with other researchers), described in this paper, has led him to change his mind: he now believes that digital learners are superior to analog learners, and that his earlier conclusion about super-intelligent AI was therefore incorrect. This reversal of opinion, combined with the recent release (and apparent abilities) of ChatGPT, has led Hinton to conclude that super-intelligent AI, and all the attendant concerns about AI safety, may be as little as 5-20 years away. Not being an expert in AI safety (it is not his primary field), Hinton confesses to having no solutions to the "AI control problem", and is therefore very worried.
 
At an AI Safety social (organised by the Cambridge Existential Risks Initiative) immediately following Hinton’s talk, most people who were asked judged ChatGPT and similar systems to be at (roughly) AGI K on this graph (taken from this draft paper):
 
[Graph omitted: AGI capability levels, referred to below as AGI K, AGI M, AGI N, and AGI O, reproduced from the draft paper linked above.]
 
The informal consensus at the CERI AI safety social seemed to be that (a) current AI/AGI technology (such as ChatGPT and its cousins) will now progress inexorably, over a relatively small number of years, through AGI M, AGI N, and ultimately AGI O, and (b) at some point during this progression, one or more such AGIs will almost certainly escape confinement and, being at least near-super-intelligent, trigger the long-feared AI apocalypse (or, if we’re lucky, a near miss). To be clear, the informal consensus at the 25 May AI safety social appeared to be that the AI apocalypse is now more-or-less unstoppable, exactly as predicted by AI safety expert Eliezer Yudkowsky in a recent podcast entitled We’re All Gonna Die.
 
That said, not everybody in the AI world believes the worst-case "AI apocalypse" scenario to be inevitable; for some, it's merely a plausible possibility that we must therefore seek to avoid with exquisite care. Either way, it is extremely important that AGI is now part of the wider conversation on AI. Simply stated, AGI is the new AI — this is where the technology is now heading.
 
This month's speaker, Professor Pei Wang, has been a world leader in the AGI field for over 30 years.
 

About the talk

Title: "A Model of General Intelligence"

Abstract:

An AGI system, NARS, will be introduced in the talk. The work is based on the understanding of "intelligence" as adaptation with insufficient knowledge and resources. NARS (Non-Axiomatic Reasoning System) has an experience-grounded semantics, uses a concept-centered knowledge representation, and uniformly carries out various types of reasoning (deduction, induction, abduction, analogy, revision, ...) that also accomplish learning, planning, perceiving, and other cognitive functions. NARS can solve problems in a case-by-case manner without problem-specific algorithms by dynamically allocating computational resources. NARS is implemented as an open-source project and partial results have been used in commercial software.
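
For readers curious about what "non-axiomatic" reasoning means in practice, here is a minimal illustrative sketch of NARS-style truth values, each a (frequency, confidence) pair summarising finite evidence rather than a binary true/false. The deduction and revision formulas below follow Wang's published Non-Axiomatic Logic, to the best of our understanding; the Python structure, names, and example values are our own and are not taken from the official open-source NARS implementation.

    # Illustrative sketch only; not from the official NARS/OpenNARS codebase.
    # In NARS, every statement carries a truth value (frequency, confidence)
    # derived from finite evidence, rather than a binary true/false.

    from dataclasses import dataclass

    K = 1.0  # "evidential horizon" parameter of Non-Axiomatic Logic (default 1)

    @dataclass
    class Truth:
        frequency: float   # proportion of positive evidence, in [0, 1]
        confidence: float  # stability against future evidence, in [0, 1)

    def deduction(t1: Truth, t2: Truth) -> Truth:
        """From (A -> B, t1) and (B -> C, t2), derive a truth value for (A -> C)."""
        f = t1.frequency * t2.frequency
        c = t1.frequency * t2.frequency * t1.confidence * t2.confidence
        return Truth(f, c)

    def revision(t1: Truth, t2: Truth) -> Truth:
        """Merge two independent bodies of evidence about the same statement."""
        w1 = K * t1.confidence / (1.0 - t1.confidence)  # evidence weight of t1
        w2 = K * t2.confidence / (1.0 - t2.confidence)  # evidence weight of t2
        w_positive = t1.frequency * w1 + t2.frequency * w2
        w_total = w1 + w2
        return Truth(w_positive / w_total, w_total / (w_total + K))

    # "Robins are birds" and "birds are animals", each well supported, yield
    # "robins are animals" with the same sign but lower confidence:
    print(deduction(Truth(0.9, 0.9), Truth(0.9, 0.9)))  # frequency 0.81, confidence ~0.66
    # Two equally confident but conflicting observations revise to an undecided 0.5:
    print(revision(Truth(1.0, 0.5), Truth(0.0, 0.5)))   # frequency 0.5, confidence ~0.67

As the abstract notes, the same evidence-based truth values feed NARS's other inference rules (induction, abduction, analogy, and so on), which is how a single uniform mechanism can cover both reasoning and learning.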

About the speaker

Pei Wang received his Bachelor of Science degree and Master of Science degree in Computer Science from Peking University in 1983 and 1986, respectively, and his Doctor of Philosophy degree in Computer Science and Cognitive Science from Indiana University, Bloomington, in 1995. He is an Associate Professor of Instruction in the Department of Computer and Information Sciences at Temple University. His research is interdisciplinary, including artificial intelligence, psychology, linguistics, logic, and philosophy. He has been designing and developing a model of intelligence, NARS (Non-Axiomatic Reasoning System), for several decades. He is the founding Chief Executive Editor of the Journal of Artificial General Intelligence and the Vice Chair of the Artificial General Intelligence Society.

About CAIS

Cambridge AI Social (CAIS) seeks to deliver a series of in-person “AI + pizza” events (CAIS Lectures) in Cambridge (one of Europe’s hottest AI hubs!).

Fundamental to the CAIS vision are:

  • speakers must be recognised world leaders in their field
  • events are non-profit, with minimal cost to attendees
  • each event comprises a roughly one-hour talk followed by socialisation (with pizza!) 

Sponsors

CAIS would not be possible without the support of its sponsors:

Please note

  • This is an in-person event (audience and speaker), and will not be livestreamed.
  • A recording of the talk, and images of the event, will be captured as part of the event.
  • Your personal image, voice, etc may be among the audio, video, and images captured.
  • By attending an event you are granting us permission to use these recordings/images.
  • These recordings (audio and video) and images may later be disseminated by CAIS.
  • For example, CAIS may post images and event videos to its website, and to YouTube.
  • In particular, CAIS may license the event video to e.g. commercial VoD channels.
  • In order to qualify for a student ticket, you must be enrolled on a full-time course
  • ... please bring your student ID to the event so that the registration desk can check it!
  • We may sometimes need to find a replacement speaker (e.g. due to a cancellation)
  • ... unfortunately, in this eventuality, we will be unable to provide refunds.
  • CAIS events are held in shared indoor spaces such as auditoria, canteens, etc.
  • Do not attend if you have symptoms of a communicable disease (e.g. COVID, cold, flu).
  • We cannot be held liable should you contract a communicable disease at a CAIS event.
  • Service animals only – please do not bring any dogs/pets/emotional support animals.
  • We reserve the right to refuse admission to anyone for any reason.
  • Any ticket holder refused admission will be offered a partial refund.
  • Ticket holders will be fully refunded should an entire event be cancelled.