
AI can make high-definition fake videos from just a simple sketch

A fake is only as good as it looks. But while counterfeiting a handbag or watch takes time and effort, churning out a fake video has become surprisingly easy.

A new system can turn a few simple animated line drawings into realistic fake clips in high definition. The software is open source, meaning that it is available to anyone – and it has reignited concerns that such tools could be used to warp our perception of the world.

The development of full artificial intelligence could spell the end of the human race.
— Stephen Hawking

The theoretical physicist, who has the motor neurone disease amyotrophic lateral sclerosis (ALS), is using a new system developed by Intel to speak.

Machine learning experts from the British company SwiftKey were also involved in its creation. Their technology, already used in a smartphone keyboard app, learns how the professor thinks and suggests the words he might want to use next.
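The idea behind such predictive text is to learn, from a user's own writing, which words tend to follow which. The snippet below is a minimal sketch of that idea using simple bigram counts; it is not SwiftKey's actual model, and the sample corpus and function names are invented for illustration.

```python
# Minimal sketch of next-word suggestion via bigram frequencies.
# Illustrative toy only, not SwiftKey's actual algorithm; the sample
# corpus and function names are assumptions made for this example.
from collections import Counter, defaultdict


def train_bigrams(corpus: str) -> dict:
    """Count how often each word follows another in the user's text."""
    words = corpus.lower().split()
    following = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following


def suggest_next(following: dict, last_word: str, k: int = 3) -> list:
    """Return the k words most often seen after `last_word`."""
    return [w for w, _ in following[last_word.lower()].most_common(k)]


# A model trained on a user's past writing favours that user's own phrasing.
model = train_bigrams(
    "the development of full artificial intelligence could spell the end "
    "of the human race the development of such systems must be studied"
)
print(suggest_next(model, "the"))  # e.g. ['development', 'end', 'human']
```

Real keyboard apps go well beyond word-pair counts, but the principle is the same: a personalised statistical model ranks likely continuations so the user has to type less.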

Prof Hawking says the primitive forms of artificial intelligence developed so far have already proved very useful, but he fears the consequences of creating something that can match or surpass humans.

It would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.
— Stephen Hawking

In January 2015, Stephen Hawking, Elon Musk, and dozens of artificial intelligence experts signed an open letter on artificial intelligence calling for research on the societal impacts of AI. The letter affirmed that society can reap great potential benefits from artificial intelligence, but called for concrete research on how to prevent potential “pitfalls”: AI could help eradicate disease and poverty, but researchers must not create something that cannot be controlled. The four-paragraph letter, titled “Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter”, lays out detailed research priorities in an accompanying twelve-page document.

Background

By 2014, both physicist Stephen Hawking and business magnate Elon Musk had publicly voiced the opinion that superhuman artificial intelligence could provide incalculable benefits, but could also end the human race if deployed incautiously (see Existential risk from advanced artificial intelligence). Hawking and Musk both sit on the scientific advisory board for the Future of Life Institute, an organization working to “mitigate existential risks facing humanity”. The institute drafted an open letter directed to the broader AI research community, and circulated it to the attendees of its first conference in Puerto Rico during the first weekend of 2015. The letter was made public on January 12.
