Artificial intelligence is presently busy composing music and making movie trailers, so what is it that AI cannot do?
The short answer is that no one really knows. When it comes to mimicking human thought and action – in decision-making at least – there doesn’t seem to be a limit.
The only limitation on AI is the real world – hardware engineers simply cannot yet build the physical machines that AI software is more than capable of running.
So, for example, AI systems are more than capable of mimicking human movement in a virtual environment inside a computer, but even the best humanoid robots built so far look and move in a way that lacks the smoothness of real humans. That is, they don’t seem natural enough.
Some people say it’s just a matter of time before the hardware catches up with the software, and that might give humans just enough time to stave off the total takeover of the world by AI-driven robots.
Right now, one of the most novel things AI systems such as IBM’s Watson can do is produce trailers for movies such as Morgan, the science-fiction film directed by Luke Scott.
Watson also autonomously produced 90-second highlight reels after viewing entire tennis matches at Wimbledon earlier this year.
AI can also make music.
Artist Taryn Southern – who is presumably human – has released a new single which was almost entirely composed by AI algorithms, according to a new report on CNN.com.
“In a funny way, I have a new song-writing partner who doesn’t get tired and has this endless knowledge of music making,” Southern told CNN.
It’s not the first time AI has been used to at least assist in making music, but lately, even Google’s getting in on the act.
The search giant launched a special website to research the whole idea of art and music created by AI.
Google has a machine learning framework called TensorFlow, which programmers use to create AI applications. Under the TensorFlow umbrella, Google has now introduced something called Magenta.
Through Magenta, Google will release machine learning models and other AI tools that can create art and music, or at least help in their creation.
It’s all part of a project managed by the so-called Google Brain team, which says: “Magenta has two goals. First, it’s a research project to advance the state of the art in machine intelligence for music and art generation. Machine learning has already been used extensively to understand content, as in speech recognition or translation. With Magenta, we want to explore the other side—developing algorithms that can learn how to generate art and music, potentially creating compelling and artistic content on their own.
“Second, Magenta is an attempt to build a community of artists, coders and machine learning researchers. The core Magenta team will build open-source infrastructure around TensorFlow for making art and music. We’ll start with audio and video support, tools for working with formats like MIDI, and platforms that help artists connect to machine learning models. For example, we want to make it super simple to play music along with a Magenta performance model.”
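To make the “learn to generate” idea concrete, here is a toy sketch of generative music modelling: a first-order Markov chain that learns note-to-note transition statistics from an existing melody, then samples a new one. This is not Magenta’s actual code – Magenta’s models are deep neural networks – and the training melody here is invented for illustration; only the underlying generate-from-learned-statistics idea carries over.

```python
import random

def train(melody):
    """Count which note tends to follow which in the training melody."""
    transitions = {}
    for current, nxt in zip(melody, melody[1:]):
        transitions.setdefault(current, []).append(nxt)
    return transitions

def generate(transitions, start, length, seed=0):
    """Sample a new melody of the given length from the learned transitions."""
    rng = random.Random(seed)
    note = start
    melody = [note]
    for _ in range(length - 1):
        # Pick a successor according to learned frequencies;
        # fall back to the start note if this note was never followed.
        note = rng.choice(transitions.get(note, [start]))
        melody.append(note)
    return melody

# Invented example melody (note names, not a real score).
training_melody = ["C4", "E4", "G4", "E4", "C4", "D4", "E4", "C4"]
model = train(training_melody)
print(generate(model, "C4", 8))
```

The generated melody will only ever contain transitions heard in the training data – the same property, at a vastly smaller scale, that lets Magenta-style models produce music that sounds like what they were trained on.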
The idea of a computer programme composing music is not actually new. In fact, it was successfully demonstrated on national television by a young Ray Kurzweil back in the 1960s.
And, coincidentally perhaps, Kurzweil currently works for Google in the area of machine learning and language processing.