OpenAI’s GPT-4 is the latest large language model (LLM) release. GPT-4 surpasses its predecessor in reliability, creativity, and the ability to follow intricate instructions. It can handle more nuanced prompts than previous releases, and it is multimodal, meaning it was trained on both images and text. We don’t yet fully understand its capabilities – yet it has already been deployed to the public.

The Center for Humane Technology wants to close the gap between what the world hears publicly about AI from splashy CEO presentations and what the people closest to the risks and harms inside AI labs are saying. Tristan Harris and Aza Raskin discuss how existing AI capabilities already pose catastrophic risks to a functional society, how AI companies are caught in a race to deploy as quickly as possible without adequate safety measures, and what it would mean to upgrade our institutions for a post-AI world. They translated their concerns into a cohesive story and presented the resulting slides to heads of institutions and major media organizations in New York, Washington DC, and San Francisco. The talk you can hear below is the culmination of that work, which is ongoing.
AI may help us achieve major advances like curing cancer or addressing climate change. But the point they’re making is this: if our dystopia is bad enough, it won’t matter how good the utopia we hoped to create was. We only get one shot, and we need to move at the speed of getting it right.