
How intelligent is artificial intelligence?




How smart is artificial intelligence and how fast is it advancing?


In a recent post I mentioned machine learning research that taught an algorithm to master 40-year-old video games.


If Atari 1982 video games are the peak of 2020 research, how hard is it to train an algorithm to play Pong, which came out in 1972? Much, much easier. Machine learning can master the game in about 250 lines of code. In fact, Pong is one of the most popular ways of teaching reinforcement learning theory and practice.
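
To give a sense of what those few hundred lines actually involve, here is a condensed sketch of the standard approach: a tiny policy-gradient (REINFORCE) agent that learns Pong from raw pixels. It assumes the classic OpenAI Gym Atari environment and its original step/reset API, and the network size and learning rate are illustrative rather than tuned.

```python
# A condensed sketch of a policy-gradient (REINFORCE) Pong agent, in the spirit of the
# ~250-line examples mentioned above. Assumes the classic Gym Atari environment and API
# (pip install "gym[atari]"); hyperparameters are illustrative, not tuned.
import gym
import numpy as np

H, D, lr, gamma = 200, 80 * 80, 1e-3, 0.99            # hidden units, input size, step size, discount
model = {"W1": np.random.randn(H, D) / np.sqrt(D),     # two-layer policy network
         "W2": np.random.randn(H) / np.sqrt(H)}

def preprocess(frame):
    """Crop and downsample a 210x160x3 Atari frame to an 80x80 binary vector."""
    f = frame[35:195:2, ::2, 0].astype(np.float64)     # crop, downsample, keep one colour channel
    f[(f == 144) | (f == 109)] = 0                     # erase background colours
    f[f != 0] = 1                                      # paddles and ball become 1
    return f.ravel()

def policy_forward(x):
    """Return the probability of moving the paddle UP, plus the hidden activations."""
    h = np.maximum(0, model["W1"] @ x)                 # ReLU
    p = 1.0 / (1.0 + np.exp(-(model["W2"] @ h)))       # sigmoid
    return p, h

env = gym.make("Pong-v0")
obs, prev = env.reset(), None
xs, hs, dlogps, rewards = [], [], [], []

while True:
    cur = preprocess(obs)
    x = cur - prev if prev is not None else np.zeros(D)  # frame difference captures motion
    prev = cur

    p, h = policy_forward(x)
    action = 2 if np.random.uniform() < p else 3       # 2 = UP, 3 = DOWN in the Atari action set
    xs.append(x); hs.append(h)
    dlogps.append((1.0 if action == 2 else 0.0) - p)   # gradient of log-probability w.r.t. the logit

    obs, reward, done, _ = env.step(action)
    rewards.append(reward)

    if done:                                           # episode over: compute discounted returns
        R, returns = 0.0, np.zeros(len(rewards))
        for t in reversed(range(len(rewards))):
            if rewards[t] != 0:                        # a point was scored: reset the running return
                R = 0.0
            R = gamma * R + rewards[t]
            returns[t] = R
        returns = (returns - returns.mean()) / (returns.std() + 1e-8)

        # REINFORCE: scale the log-probability gradients by the (normalised) returns
        eph, epx = np.vstack(hs), np.vstack(xs)
        epdlogp = np.array(dlogps) * returns
        dW2 = eph.T @ epdlogp
        dh = np.outer(epdlogp, model["W2"])
        dh[eph <= 0] = 0                               # backprop through the ReLU
        model["W1"] += lr * (dh.T @ epx)               # plain gradient ascent, no optimiser
        model["W2"] += lr * dW2

        print("episode reward:", sum(rewards))
        obs, prev = env.reset(), None
        xs, hs, dlogps, rewards = [], [], [], []
```

Left training for long enough, an agent along these lines starts to beat the built-in opponent, which is exactly the kind of narrow, well-defined win this post is about.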


So why can a newbie programmer master Pong, which is only ten years older than games that require cutting-edge AI? The answer to this question helps explain why the public – and many journalists – have a misplaced faith in the AI revolution.


The leap in videogame technology from one decade to the next was made possible by Moore’s Law, the observation that computer chip capability doubles roughly every two years. This compounding enabled programmers and device manufacturers to release new, more powerful consoles regularly from the 1970s onwards.
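
To make the exponential concrete, here is a back-of-the-envelope calculation (my own illustration, not from the post), assuming the conventional doubling period of roughly two years.

```python
# Back-of-the-envelope illustration of how Moore's Law compounds.
# The two-year doubling period is itself an approximation.
def moores_law_factor(years, doubling_period=2.0):
    """Approximate growth factor in chip capability over a span of years."""
    return 2 ** (years / doubling_period)

# The decade separating Pong (1972) from the Atari games of 1982
print(f"~{moores_law_factor(10):.0f}x")        # ~32x
# Fifty years of the same trend
print(f"~{moores_law_factor(50):,.0f}x")       # ~33,554,432x
```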


But such exponential leaps in technology are rare and should not be assumed to apply to AI, as Rodney Brooks, former head of the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT, explains in an interview with OneZero:


“It’s a unique event in history that computation got so much better, on a regular basis, for 50 years. Most people, if you’re thinking about the future, apply Moore’s Law to every technology, which is totally wrong. Moore’s Law was unique in why it could happen, and the whole misunderstanding of that has distorted our view of how things will change in the future.”


Constraints on creative writing


Similar misconceptions occur in other creative fields, including writing. Here, Natural Language Generation (NLG) is good at automating document production based on complex data. It is used widely in the financial and pharmaceutical industries, where reports and analysis are an essential part of the business workflow. In journalism it has a role to play in writing short news articles based on structured data such as financial, sports and election results.
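
To show how modest this kind of data-to-text generation can be, here is a toy illustration (not any vendor's actual system) of the template-driven approach: a structured record goes in, a short, serviceable news item comes out.

```python
# A toy illustration of template-driven natural language generation:
# structured data in, a short news item out. Names and data are invented.
from dataclasses import dataclass

@dataclass
class MatchResult:
    home: str
    away: str
    home_goals: int
    away_goals: int
    scorers: list

def write_report(r: MatchResult) -> str:
    """Slot verified facts into pre-written linguistic patterns."""
    if r.home_goals > r.away_goals:
        headline = f"{r.home} beat {r.away} {r.home_goals}-{r.away_goals}"
    elif r.home_goals < r.away_goals:
        headline = f"{r.away} won {r.away_goals}-{r.home_goals} away at {r.home}"
    else:
        headline = f"{r.home} and {r.away} drew {r.home_goals}-{r.away_goals}"
    scorers = ", ".join(r.scorers) if r.scorers else "no one"
    return f"{headline}. Goals came from {scorers}."

print(write_report(MatchResult("Rovers", "United", 2, 1, ["Smith", "Jones", "Khan"])))
```

Production systems are far more sophisticated, but the principle of slotting verified data into linguistic patterns is the same, and it is a long way from creative writing.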


But there’s an enormous gap between data-based automation and machine learning that imitates creative writing and human conversation. That gap gets lost in headlines such as “Will AI ever write an award-winning novel?”, which only add to the confusion (answer: not in this lifetime).


That’s not to dismiss advances in deep-learning NLG such as GPT-2 or Microsoft’s Turing project. Google also recently showcased Meena, a chatbot that can chat about (almost) anything. Impressive if you are trying to pass the Turing test, but far less satisfactory when trying to write a full-length novel.
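
For a sense of what sampling from a model like GPT-2 involves in practice, here is a minimal sketch using the Hugging Face transformers library (my choice of tooling; the post itself only names the models). The prompt is invented for illustration.

```python
# Sampling a short continuation from GPT-2 with the Hugging Face `transformers`
# library (my choice of tooling; the prompt is invented for illustration).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "The novel opens on a rain-soaked street where"
result = generator(prompt, max_length=60, num_return_sequences=1, do_sample=True)
print(result[0]["generated_text"])
```

The continuations are often locally fluent, which is exactly why short demos impress and why they say little about sustaining a plot over tens of thousands of words.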


In fact, the best example I can find of machine-learning NLG generating full-length ‘original’ copy is the production of simple student essays, and very badly written ones at that.


The same with music. OpenAI today announced Jukebox, an open source AI system for generating music, including songs and lyrics based on popular genres. It is impressive compared with what came before, but the results still sound like mid-level performers played back through a poorly tuned AM receiver.


That’s not to say that machine-learning composition doesn’t have its uses. It just needs competent intervention (‘a human in the loop’) to achieve results that match the efforts of commercial artists and producers.


Human intervention normally occurs at two levels: first, curating the data used to train the algorithm; and second, sifting through the output – melodies and orchestrations – and assembling it into a structured composition.
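
As a schematic of that two-level workflow, the sketch below shows the shape of a ‘human in the loop’ pipeline: the machine proposes many fragments, a person filters and assembles them. Every name here is a hypothetical placeholder rather than a real tool.

```python
# A schematic sketch of the 'human in the loop' workflow described above:
# the model proposes many candidate fragments, a person curates and assembles.
# All names are hypothetical placeholders, not real tools.
import random

def generate_candidates(model, prompt, n=100):
    """Step 1 (machine): toss out many rough fragments: melodies, sentences, riffs."""
    return [model(prompt) for _ in range(n)]

def curate(candidates, keep):
    """Step 2 (human): keep only the fragments a person judges worth using."""
    return [c for c in candidates if keep(c)]

def assemble(fragments):
    """Step 3 (human): weave the chosen fragments into a structured whole."""
    return " / ".join(fragments)

# Toy stand-in for a trained generative model
toy_model = lambda prompt: f"{prompt}-{random.randint(0, 9999):04d}"

candidates = generate_candidates(toy_model, "motif", n=100)
chosen = curate(candidates, keep=lambda c: c.endswith(("0", "5")))  # stand-in for human taste
print(assemble(chosen[:4]))
```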


In the right hands, the results are highly listenable. Artists such as Skygge use dedicated machine-learning platforms to create original compositions and orchestrate a cappella recordings. The composer behind the name, Benoit Carré, was also responsible for a passable impression of The Beatles with the AI-generated track Daddy’s Car.


This is what machine learning does well when it comes to creative, rather than scientific, outputs. Given plenty of data to work with, you can train an algorithm to toss out hundreds of sentences, melodies or images. Success depends on building the right training set and on the ability to spot the best units of content and weave them into a finished product.

Stuck in the 1980s


Games, music, writing. If there’s a pattern emerging, it is that machine learning is useful when it doesn’t have to adhere to the rules. If it comes up with a clunky sentence or an uninspiring tune, it doesn’t matter, as long as some of what it generates inspires a human curator. It’s less suitable for domains such as law, ethics and healthcare, where strict adherence to tradition and precedent is a priority (though that same rigidity makes these areas good candidates for rules-based automation).


There’s a paradox at work here. On one hand, from breaking news to games development and music, creatives are training machines to generate ideas that inspire original articles, songs and first-person shooters. This is where machine learning excels in the creative fields.


On the other hand, AI is far behind humans when it comes to producing a song or long-form writing entirely by itself. Metaphors are dangerous, but the examples in this article – Atari video games, AM radio and badly written student essays (in my case) – place us firmly in the 1980s. As designers, journalists and composers advance into the 2020s, autonomous AI has a great deal of catching up to do.
