The AI Mozart: Sparks Between Machines and Art
    Sometime in the coming decades, an external system that collects and analyzes endless streams of biometric data will probably be able to understand what’s going on in my body and in my brain much better than me. Such a system will transform politics and economics by allowing governments and corporations to predict and manipulate human desires. What will it do to art? Will art remain humanity’s last line of defense against the rise of the all-knowing algorithms?
    In the modern world art is usually associated with human emotions. We tend to think that artists are channeling internal psychological forces, and that the whole purpose of art is to connect us with our emotions or to inspire in us some new feeling. Consequently, when we come to evaluate art, we tend to judge it by its emotional impact and to believe that beauty is in the eye of the beholder.
    This view of art developed during the Romantic period in the 19th century, and came to maturity exactly a century ago, when in 1917 Marcel Duchamp purchased an ordinary mass-produced urinal, declared it a work of art, named it “Fountain,” signed it, and submitted it to an art exhibition. In countless classrooms across the world, first-year art students are shown an image of Duchamp’s “Fountain,” and at a sign from the teacher all hell breaks loose. It is art! No it isn’t! Yes it is! No way!
    After letting the students release some steam, the teacher focuses the discussion by asking “What exactly is art? And how do we determine whether something is a work of art or not?” After a few more minutes of back and forth the teacher steers the class in the right direction: “Art is anything people think is art, and beauty is in the eye of the beholder.” If people think that a urinal is a beautiful work of art―then it is.
    In 1952, the composer John Cage outdid Duchamp by creating “4’33”.” This piece, originally composed for a piano but today also played by full symphonic orchestras, consists of 4 minutes and 33 seconds during which no instrument plays anything. The piece encourages the audience to observe their inner experiences in order to examine what music is, what we expect of it, and how music differs from the random noises of everyday life. The message is that it is our own expectations and emotions that define music and that separate art from noise.
    If art is defined by human emotions, what might happen once external algorithms are able to understand and manipulate human emotions better than Shakespeare, Picasso or Lennon? After all, emotions are not some mystical phenomenon―they are a biochemical process. Hence, given enough biometric data and enough computing power, it might be possible to hack love, hate, boredom and joy.
    In the not-too-distant future, a machine-learning algorithm could analyze the biometric data streaming from sensors on and inside your body, determine your personality type and your changing moods, and calculate the emotional impact that a particular song―or even a particular musical key―is likely to have on you.
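    The mapping sketched above, from raw biometric signals to a song’s likely emotional impact, is at bottom a supervised-learning problem. The short Python sketch below is a minimal illustration under stated assumptions: the sensor features (heart rate, skin conductance, respiration) and the self-reported valence labels are entirely hypothetical, and a real system would need far richer data and models.

```python
# A minimal sketch, assuming purely hypothetical biometric features and labels.
# It fits a linear model mapping sensor readings taken while a song plays to a
# self-reported emotional valence score, i.e. the input-output mapping the
# paragraph above describes.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: rows are listening sessions, columns are
# biometric features (heart rate, skin conductance, respiration rate).
X = rng.normal(size=(200, 3))
true_w = np.array([0.8, -0.5, 0.3])                # unknown "emotional response" weights
y = X @ true_w + rng.normal(scale=0.1, size=200)   # valence reported after each song

# Ordinary least squares: estimate how each biometric signal predicts valence.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Predict the likely emotional impact of a song on a new listener state.
new_session = np.array([1.2, -0.4, 0.1])
print("predicted valence:", new_session @ w_hat)
```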
    Of all forms of art, music is probably the most susceptible to Big Data analysis, because both inputs and outputs lend themselves to mathematical depiction. The inputs are the mathematical patterns of soundwaves, and the outputs are the electrochemical patterns of neural storms. Allow a learning machine to go over millions of musical experiences, and it will learn how particular inputs result in particular outputs.
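    The claim that the inputs “lend themselves to mathematical depiction” can be made concrete in a few lines: a tone is just an array of samples, and its structure falls out of a Fourier transform. The 440 Hz sine wave and the 44.1 kHz sample rate below are illustrative choices, not anything taken from the text.

```python
# A sine tone is a plain numeric array; an FFT recovers the pattern behind it.
import numpy as np

sample_rate = 44100                        # samples per second
t = np.arange(0, 1.0, 1 / sample_rate)     # one second of audio
wave = 0.6 * np.sin(2 * np.pi * 440 * t)   # an A4 sine tone at 440 Hz

spectrum = np.abs(np.fft.rfft(wave))
freqs = np.fft.rfftfreq(len(wave), 1 / sample_rate)

# The dominant frequency reads off the mathematical pattern of the soundwave.
print("dominant frequency (Hz):", freqs[np.argmax(spectrum)])
```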
    The idea of computers composing music is hardly new. David Cope, a musicology professor at the University of California in Santa Cruz, created a computer program called EMI (Experiments in Musical Intelligence), which specialized in imitating the style of Johann Sebastian Bach. In a public showdown at the University of Oregon, an audience of university students and professors listened to three pieces―one a genuine Bach, another produced by EMI and a third composed by a local musicology professor, Steve Larson. The audience was then asked to vote on who composed which piece. The result? The audience thought that EMI’s piece was genuine Bach, that Bach’s piece was composed by Larson, and that Larson’s piece was produced by a computer.
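    EMI itself works by analyzing and recombining fragments of real Bach scores, and nothing below is Cope’s method. The toy sketch is only a crude illustration of the general statistical idea of style imitation: learn which note tends to follow which in a corpus, then sample a new sequence from those transitions. The “corpus” here is an invented list of pitches.

```python
# A toy first-order Markov chain over pitches. This is NOT how EMI works; it
# only illustrates learning note-to-note transitions and sampling from them.
import random
from collections import defaultdict

corpus = ["C", "D", "E", "F", "E", "D", "C", "E", "G", "E", "C", "D", "E", "C"]

# Count which pitch follows which in the (invented) corpus.
transitions = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    transitions[a].append(b)

# Walk the learned transition table to produce a new melody.
random.seed(1)
note = "C"
melody = [note]
for _ in range(12):
    note = random.choice(transitions[note])
    melody.append(note)

print(" ".join(melody))
```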
    Hence in the long run, algorithms may learn how to compose entire tunes, playing on human emotions as if they were a piano keyboard. Using your personal biometric data the algorithms could even produce personalized melodies, which you alone in the entire world would appreciate.
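    One way to picture such “personalized melodies” is as a generate-and-score loop: propose many candidate tunes and keep the one that a per-listener emotion model rates highest. The listener_score function below is a pure stand-in for such a model; the text specifies nothing of the kind.

```python
# A speculative sketch: generate candidate note sequences and keep the one a
# (hypothetical) per-listener model scores highest. The scoring rule is a
# stand-in, not anything described in the text.
import random

NOTES = ["C", "D", "E", "F", "G", "A", "B"]

def listener_score(melody, preferred="E"):
    """Hypothetical listener model: reward melodies rich in a preferred pitch."""
    return melody.count(preferred) - 0.1 * len(set(melody))

random.seed(7)
candidates = [[random.choice(NOTES) for _ in range(8)] for _ in range(50)]
best = max(candidates, key=listener_score)
print("personalized melody:", " ".join(best))
```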
    It is often said that people connect with art because they find themselves in it. If art is really about inspiring (or manipulating) human emotions, few if any human musicians will have a chance of competing with such an algorithm, because they cannot match it in understanding the chief instrument they are playing on: the human biochemical system.
    Will this result in great art? That depends on the definition of art. If beauty is indeed in the ears of the listener, then biometric algorithms stand a chance of producing the best art in history. If art is about something deeper than human emotions, and should express a truth beyond our biochemical vibrations, biometric algorithms might not make very good artists. But nor would most humans. In order to enter the art market, algorithms won’t have to begin by straight away surpassing Beethoven. It is enough if they outperform Justin Bieber.