“Apocalyptic pronouncements from scientists and entrepreneurs have driven [a] surge in interest” in artificial intelligence, writes The Guardian’s science editor Ian Sample. But is it reasonable to expect that machines will one day willfully turn on their human creators?
In the spirit of cautionary prophesies such as “2001: A Space Odyssey,” “Demon Seed,” “A.I. Artificial Intelligence” and “Her,” Alex Garland’s provocative film raises questions of gender, ethics and existentialism.
Theoretical physicist and all-around genius Stephen Hawking has relied for decades on a kind of artificial intelligence to help him communicate, but that doesn’t mean he endorses AI without reservation.
A program built by Russian developers is claimed to be the first to pass the Turing Test, in which human judges must be unable to reliably distinguish a machine’s conversation from a person’s—and academics are worried about the implications for cybercrime.
When female writers disappeared from the Wikipedia heading “American novelists,” more than a few eyebrows were raised; the revocation of Pvt. Bradley Manning’s selection as grand marshal of the San Francisco Gay Pride parade proves that the military-industrial complex rules all; meanwhile, another price-fixing scandal reminiscent of Libor is about to explode. These discoveries and more after the jump.
And now a quote that could come from Dr. Strangelove: “I will stand my artificial intelligence against your human any day of the week and tell you that my A.I. will pay more attention to the rules of engagement and create fewer ethical lapses than a human force.”
Even though not everything under the sun can be precisely quantified—particularly human behavior—the worrisome trend of “quants,” experts from physics and other scientific fields, infiltrating Wall Street firms to apply their skills to the stock market continues unabated.