Computational creativity is a multidisciplinary area of research involving the fields of artificial intelligence, cognitive psychology, philosophy and art. Apart from developing theories about creativity, researchers in the field have tried to synthesize many kinds of human output, including music, humor, writing and problem solving. Rather than surveying all of these endeavors, I will elaborate a bit on the concept of creativity and then provide a couple of examples of recent developments within the field.
The concept of creativity, like so many abstract concepts, is tied to how human activity appears to people and is therefore part of our worldview. Since the worldviews of people in different cultures (and of individuals within those cultures) differ greatly and have shifted throughout history, there are naturally numerous variants of the concept of creativity.
Discovering or creating?
The most common broad definition of creativity in the contemporary western world is that the result of a creative act is something original and valuable. Note that the distinction between the implications of this definition and those of "discovery" or "exploration" isn't easy to discern. For every creation it can be argued that the possibility of that creation was always there, waiting to be discovered. Interestingly, the ancient Greeks, for example, saw art as discovery, not creation.
What is creation and what is exploration? That is a purely philosophical question at its core. It relates to what we imagine is "really out there". As humans we are always stuck with our interpretations and inner creations. Our brain excels at image processing and language processing/generation, through which it creates meaning for us. What we see and experience is not what is out there; it is created within ourselves with something out there as input. Therefore there is nothing in our experience that isn't subject to interpretation and creation, nothing that doesn't come with our limited association spaces and frames of reference.
Another concept relevant to creativity is that of free will versus determinism. If there is no such thing as free will, it can be argued that there isn't any creation happening either. The universe then just unfolds in a blind, mechanical manner, and we can only have the false impression of creating.
In the spirit of the above, I would say that there is no creativity that isn't exploratory, and no exploration that isn't creative in some sense. That makes it very natural, and so much more exciting, to switch to the next topic of this post.
Being creative by trial and error
Reinforcement learning refers to a set of algorithms within machine learning/artificial intelligence concerned with learning policies. Given a number of possible actions, and without any need for prior knowledge about the world, an agent builds a policy for succeeding in the world by taking actions in a trial-and-error manner. Learning is possible thanks to negative or positive feedback during this process: the feedback lets the algorithm know whether the actions taken were beneficial, which may lead to an updated policy. The inspiration for reinforcement learning (as for many models within machine learning) comes from the fact that humans and other animals seem to be able to learn in this manner.
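The trial-and-error loop described above can be sketched as tabular Q-learning, one of the classic reinforcement learning algorithms. This is a minimal illustration, not the method of any particular system mentioned in this post; the `step` function is a stand-in for whatever world the agent acts in.

```python
import random

def q_learning(n_states, n_actions, step, episodes=500,
               alpha=0.1, gamma=0.9, epsilon=0.1):
    """Learn a policy by trial and error.

    `step(state, action)` is assumed to return (next_state, reward, done).
    """
    # The policy starts out knowing nothing about the world.
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Mostly exploit what has been learned so far, sometimes explore.
            if random.random() < epsilon:
                action = random.randrange(n_actions)
            else:
                action = max(range(n_actions), key=lambda a: Q[state][a])
            next_state, reward, done = step(state, action)
            # Positive or negative feedback updates the policy estimate.
            best_next = max(Q[next_state])
            Q[state][action] += alpha * (reward + gamma * best_next
                                         - Q[state][action])
            state = next_state
    return Q
```

Run on a toy environment (say, a short corridor where moving right eventually yields a reward), the agent gradually learns to prefer the rewarding action purely from feedback, with no prior description of the world.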
By combining reinforcement learning with deep neural networks, the Google-owned company DeepMind achieved something quite astonishing (it received a lot of attention in the news when it was revealed in the beginning of 2015). Given only the graphics (the colors of the pixels on the screen), the controls and the feedback (scoring, dying and so on), their algorithm learned how to play a number of old Atari games. The games were very different from each other in nature, so every time the algorithm was given a new game it was as if it were placed in a new world. Note that this setting is very different from what traditional artificial intelligence algorithms work with. In the classic approach, knowledge about the game is available to the algorithm from the beginning, in the form of a database of game states and how to value them. Such an algorithm is dedicated to playing one specific game and is useless for anything else. In contrast, reinforcement learning algorithms start with nothing and build up their knowledge about the world automatically.
Here is a video of DeepMind's algorithm learning how to play the game Breakout:
It is impressive enough that the algorithm performed at a superhuman level on some of the games after just a few hours of training. The most surprising fact, however, was that it came up with ways to "solve" games: novel strategies that exploit weaknesses in the games in order to succeed very quickly. These were strategies that the creators of the algorithm hadn't thought of themselves. If a human player came up with such a strategy, I'd say very few people would consider it strange to compliment the player by calling him or her creative. For this reason, I consider today's state-of-the-art reinforcement learning algorithms to be capable of at least some basic level of creativity.
Generating output after learning a pattern
Many attempts within computational creativity fall into this category. For a long time, various kinds of machine learning algorithms have been used successfully to learn patterns automatically from data. Some of the models involved are probabilistic, which means they can easily be used to generate an arbitrary number of outputs, all of which are new but exhibit the same pattern that was previously learned. When the models are used as generators like this, different random seeds provide different initial states and state transitions, leading to completely new outputs. Common examples of models used for this purpose are Markov models and different types of neural networks, for instance recurrent neural networks. Many people would claim that these systems aren't creative, since they are unable to "think outside the box". But how far outside the box does the typical composer think when he or she writes a musical piece? Most of the work seems to consist of forging new melodies and harmonies that are very similar to those that already exist. Nevertheless, there has to be some more groundbreaking creativity involved in music composition every now and then, otherwise music wouldn't evolve over time.
An exciting example of this type of creative system can be found on deep learning researcher Andrej Karpathy's blog, in a post about generating Shakespeare gibberish using a recurrent neural network. The complete works of Shakespeare are available online, and Karpathy uses them to train his model and then generate similar text character by character. He also provides code that lets anyone train a similar model on any corpus of text. Here is a link to the post.
Here is Google Research's post about "dreaming" neural networks, which is more of a visualization tool but still somewhat relevant in this context.
Here is a free piece of software that can generate jazz solos using a probabilistic grammar.
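The grammar-based approach can be sketched as follows. This is a hypothetical toy grammar for note sequences, not the actual grammar used by that software: each symbol expands into one of several alternatives, chosen with the given probabilities, until only terminal notes and rests remain.

```python
import random

# A hypothetical probabilistic grammar: each non-terminal symbol maps to
# a list of (expansion, probability) alternatives.
GRAMMAR = {
    "phrase": [(["motif", "motif"], 0.6), (["motif", "rest"], 0.4)],
    "motif":  [(["C", "E", "G"], 0.5), (["D", "F", "A"], 0.5)],
}

def expand(symbol, rng):
    """Recursively expand a symbol into a flat list of notes/rests."""
    rules = GRAMMAR.get(symbol)
    if rules is None:  # terminal symbol: a note or a rest
        return [symbol]
    alternatives, weights = zip(*rules)
    chosen = rng.choices(alternatives, weights=weights)[0]
    result = []
    for s in chosen:
        result.extend(expand(s, rng))
    return result

def generate_solo(seed=None):
    return expand("phrase", random.Random(seed))
```

Every call with a different seed yields a different, grammatically well-formed sequence, which is exactly the "new output, same learned pattern" property discussed above.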
Creating new paradigms
Every now and then humans seem to be capable of a type of creativity which is transformational, and which results in new paradigms and worldviews. Good examples of this are Einstein's theory of relativity, the notion of human rights and abstract art (the concept, not necessarily individual pieces of art). Common traits of these creations are that they seem to have been created from nothing, and that they result in new contexts which can't be understood in the language of the old ones. I have not yet seen this kind of creativity demonstrated by computers, and my feeling is that it is an AI-complete problem. "AI-complete" is a term within artificial intelligence describing problems whose solution would require very general intelligence, perhaps on par with human intelligence. In order to create a new worldview, it seems one would need a perspective to begin with, which indicates that a rather complete cognitive system would be necessary. I think that we are heading in that direction. The aforementioned company DeepMind, for instance, is creating cognitive systems that, much like the human brain, contain integrated processing of video, sound and language. They have stated openly that their goal is "solving intelligence". When systems like these will begin to create totally new contexts or paradigms is very hard to say. And the question is whether, soon thereafter, they will create constructs too complex for us to even grasp.
My view is that cognitive systems will change life as we know it long before they are capable of coming up with new paradigms. There is already an alliance between humans and computers today, and it will be strengthened immensely by the possibility of interacting through spoken language instead of explicit instructions. This development has already begun with tools like Apple's Siri and Amazon's Echo, and ten years from now these will be much more proficient at hearing what we say and understanding what we mean. Computers will increasingly take on the role of extremely capable assistants, complementing us on exactly the kinds of challenges our brains are weak at. We will be able to ask our computers complex questions and enter into creative dialogues with them. These will naturally begin with an initial question or command, which the computer will interpret in a certain, potentially erroneous way (since our natural languages are ambiguous at times). But refining the statement and telling the computer what it got wrong may in the end provide us with what we sought, possibly even with something totally unexpected and more valuable, created in part by the machine. I expect breakthroughs in all scientific areas as a result.