January 12, 2015
How AI could ruin humanity, according to smart humans
by Laura Domela

For the past 24 hours, scientists have been lining up to sign this open letter. Put simply, the proposal urges that humanity dedicate a portion of its AI research to “aligning with human interests.” In other words, let’s try to avoid creating our own, mechanized Horsemen of the Apocalypse.
While some scientists might roll their eyes at any mention of a Singularity, plenty of experts and technologists—like, say, Stephen Hawking and Elon Musk—have warned of the dangers AI could pose to our future. But while they urge us to pursue our AI research with caution, they're a bit less clear on what exactly we should be cautious about. Thankfully, others have happily filled in those gaps. Here are five of the more menacing destruction-by-singularity prophecies our brightest minds have warned against.
Continue reading at Gizmodo